Combining probabilities of nuclear accidents
To answer the pure probabilistic question that J Presley presented, with p = the probability of an item failing, the probability of at least one element failing is 1 - P(none fail) = 1 - (1-p)^n. This type of calculation is common in system reliability, where a bunch of components are linked in parallel so that the system continues to function if at least one component is functioning. You can still use this formula even if each plant item has a different failure probability p_i; the formula then becomes 1 - (1-p_1)(1-p_2)...(1-p_n).
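For concreteness, here is a quick numeric check of both formulas (in Python rather than the R used elsewhere in this thread; the probabilities and plant counts are made up purely for illustration):

```python
from math import prod

# Hypothetical numbers: 100 plants, each with a 1% chance of failing
# in a given period, assumed independent.
p, n = 0.01, 100
prob_at_least_one = 1 - (1 - p)**n
print(round(prob_at_least_one, 4))  # 0.634

# With unequal per-plant probabilities p_i, the same logic gives
# 1 - (1-p_1)(1-p_2)...(1-p_n):
p_i = [0.01, 0.02, 0.005]
prob_any = 1 - prod(1 - q for q in p_i)
print(round(prob_any, 6))  # 0.034651
```

Note how quickly the "at least one" probability grows with n even when each individual p is small.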
Combining probabilities of nuclear accidents
Before you set up your analysis, keep in mind the reality of what the current situation involves. This meltdown was not directly caused by the earthquake or the tsunami; it was caused by a lack of back-up power. If they had had enough back-up power, regardless of the earthquake/tsunami, they could have kept the cooling water running, and none of the meltdowns would have happened. The plant would probably be back up and running by now.

Japan, for whatever reason, has two electrical frequencies (50 Hz and 60 Hz), and you can't run a 50 Hz motor at 60 Hz or vice versa. So whatever frequency the plant was using/providing is the frequency they need to power up. "U.S. type" equipment runs at 60 Hz and "European type" equipment runs at 50 Hz, so in providing an alternative power source, keep that in mind. Next, that plant is in a fairly remote mountainous area. To supply external power requires a LONG power line from another area (requiring days/weeks to build) or large gasoline/diesel driven generators. Those generators are heavy enough that flying them in with a helicopter is not an option. Trucking them in may also be a problem due to the roads being blocked from the earthquake/tsunami. Bringing them in by ship is an option, but it also takes days/weeks.

The bottom line is, the risk analysis for this plant comes down to a lack of SEVERAL (not just one or two) layers of back-ups. And because this reactor is an "active design", which means it requires power to stay safe, those layers are not a luxury; they're required. This is an old plant. A new plant would not be designed this way.

Edit (03/19/2011) ==============================================

J Presley: To answer your question requires a short explanation of terms. As I said in my comment, to me this is a matter of "when", not "if", and as a crude model I suggested the Poisson Distribution/Process. The Poisson Process is a series of events that happen at an average rate over time (or space, or some other measure). These events are independent of each other and random (no patterns), and they happen one at a time (2 or more events don't happen at the exact same time). It is basically a binomial situation ("event" or "no event") where the probability that the event will happen is relatively small. Here are some links:

http://en.wikipedia.org/wiki/Poisson_process
http://en.wikipedia.org/wiki/Poisson_distribution

Next, the data. Here's a list of nuclear accidents since 1952 with the INES Level:

http://en.wikipedia.org/wiki/Nuclear_and_radiation_accidents

I count 19 accidents, of which 9 state an INES Level. For those without an INES Level, all I can do is assume the level is below Level 1, so I'll assign them Level 0. So, one way to quantify this is 19 accidents in 59 years (59 = 2011 - 1952). That's 19/59 = 0.322 accidents/year, or, in terms of a century, 32.2 accidents per 100 years. Assuming a Poisson Process gives the following graphs.

Originally, I suggested a Lognormal, Gamma, or Exponential Distribution for the severity of the accidents. However, since the INES Levels are given as discrete values, the distribution would need to be discrete, so I would suggest either the Geometric or Negative Binomial Distribution. Here are their descriptions:

http://en.wikipedia.org/wiki/Negative_binomial_distribution
http://en.wikipedia.org/wiki/Geometric_distribution

They both fit the data about the same, which is not very well (lots of Level 0's, one Level 1, zero Level 2's, etc.).

Fit for Negative Binomial Distribution:

Fitting of the distribution 'nbinom' by maximum likelihood
Parameters:
      estimate  Std. Error
size  0.460949   0.2583457
mu    1.894553   0.7137625
Loglikelihood: -34.57827   AIC: 73.15655   BIC: 75.04543
Correlation matrix:
              size           mu
size  1.0000000000 0.0001159958
mu    0.0001159958 1.0000000000

Fit for Geometric Distribution:

Fitting of the distribution 'geom' by maximum likelihood
Parameters:
       estimate  Std. Error
prob  0.3454545   0.0641182
Loglikelihood: -35.4523   AIC: 72.9046   BIC: 73.84904

The Geometric Distribution is a simple one-parameter function while the Negative Binomial Distribution is a more flexible two-parameter function. I would go for the flexibility, plus the underlying assumptions of how the Negative Binomial Distribution was derived. Below is a graph of the fitted Negative Binomial Distribution.

Below is the code for all this stuff. If anyone finds a problem with my assumptions or coding, don't be afraid to point it out. I checked through the results, but I didn't have enough time to really chew on this.

library(fitdistrplus)

#Generate the data for the Poisson plots
x <- dpois(0:60, 32.2)
y <- ppois(0:60, 32.2, lower.tail = FALSE)

#Cram the Poisson graphs into one plot
par(pty="m", plt=c(0.1, 1, 0, 1), omd=c(0.1, 0.9, 0.1, 0.9))
par(mfrow = c(2, 1))

#Plot the probability graph
plot(x, type="n", main="", xlab="", ylab="", xaxt="n", yaxt="n")
mtext(side=3, line=1, "Poisson Distribution Averaging 32.2 Nuclear Accidents Per Century", cex=1.1, font=2)
xaxisdat <- seq(0, 60, 10)
pardat <- par()
yaxisdat <- seq(pardat$yaxp[1], pardat$yaxp[2], (pardat$yaxp[2]-pardat$yaxp[1])/pardat$yaxp[3])
axis(2, at=yaxisdat, labels=paste(100*yaxisdat, "%", sep=""), las=2, padj=0.5, cex.axis=0.7, hadj=0.5, tcl=-0.3)
mtext("Probability", 2, line=2.3)
abline(h=yaxisdat, col="lightgray")
abline(v=xaxisdat, col="lightgray")
lines(x, type="h", lwd=3, col="blue")

#Plot the cumulative probability graph
plot(y, type="n", main="", xlab="", ylab="", xaxt="n", yaxt="n")
pardat <- par()
yaxisdat <- seq(pardat$yaxp[1], pardat$yaxp[2], (pardat$yaxp[2]-pardat$yaxp[1])/pardat$yaxp[3])
axis(2, at=yaxisdat, labels=paste(100*yaxisdat, "%", sep=""), las=2, padj=0.5, cex.axis=0.7, hadj=0.5, tcl=-0.3)
mtext("Cumulative Probability", 2, line=2.3)
abline(h=yaxisdat, col="lightgray")
abline(v=xaxisdat, col="lightgray")
lines(y, type="h", lwd=3, col="blue")
axis(1, at=xaxisdat, padj=-2, cex.axis=0.7, hadj=0.5, tcl=-0.3)
mtext("Number of Nuclear Accidents Per Century", 1, line=1)
legend("topright", legend=c("99% Probability - 20 Accidents or More", " 1% Probability - 46 Accidents or More"), bg="white", cex=0.8)

#Calculate the 1% and 99% values
qpois(0.01, 32.2, lower.tail = FALSE)
qpois(0.99, 32.2, lower.tail = FALSE)

#Fit the severity data
z <- c(rep(0,10), 1, rep(3,2), rep(4,3), rep(5,2), 7)
zdis <- fitdist(z, "nbinom")
plot(zdis, lwd=3, col="blue")
summary(zdis)

Edit (03/20/2011) ======================================================

J Presley: I'm sorry I couldn't finish this up yesterday. You know how it is on weekends, lots of duties. The last step in this process is to assemble a simulation using the Poisson Distribution to determine when an event happens, and then the Negative Binomial Distribution to determine the severity of the event. You might run 1000 sets of "century chunks" to generate the 8 probability distributions for Level 0 through Level 7 events. If I get the time, I might run the simulation, but for now, the description will have to do. Maybe someone reading this stuff will run it.

After that is done, you'll have a "base case" where all of the events are assumed to be INDEPENDENT. Obviously, the next step is to relax one or more of the above assumptions. An easy place to start is with the Poisson Distribution: it assumes that all events are 100% independent, and you can change that in all sorts of ways. Here are some links on non-homogeneous Poisson processes:

http://www.math.wm.edu/~leemis/icrsa03.pdf
http://filebox.vt.edu/users/pasupath/papers/nonhompoisson_streams.pdf

The same idea goes for the Negative Binomial Distribution. This combination will lead you down all sorts of paths. Here are some examples:

http://surveillance.r-forge.r-project.org/
http://www.m-hikari.com/ijcms-2010/45-48-2010/buligaIJCMS45-48-2010.pdf
http://www.michaeltanphd.com/evtrm.pdf

The bottom line is, you asked a question where the answer depends on how far you want to take it. My guess is, someone, somewhere will be commissioned to generate "an answer" and will be surprised at how long it takes to do the work.

Edit (03/21/2011) ====================================================

I had a chance to slap together the above-mentioned simulation. The results are shown below. From the original Poisson Distribution, the simulation provides eight Poisson Distributions, one for each INES Level. As the severity level rises (INES Level Number rises), the number of expected events per century drops. This may be a crude model, but it's a reasonable place to start.
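For anyone who wants to try the simulation described above, here is a rough sketch (in Python/NumPy rather than the R used elsewhere in the thread). The rate 32.2 and the Negative Binomial parameters size/mu come from the fits above; everything else (the seed, 1000 centuries, and folding Levels above 7 into a Level-7 catch-all, since the NB is unbounded) is my own assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

rate = 32.2                      # accidents per century (19 accidents / 59 years)
size, mu = 0.460949, 1.894553    # fitted NB parameters from the output above
p = size / (size + mu)           # NumPy's NB uses (n, p) with p = size/(size+mu)

n_centuries = 1000
counts = np.zeros((n_centuries, 8))  # columns = INES Levels 0..7

for i in range(n_centuries):
    n_events = rng.poisson(rate)                       # how many events this century
    levels = rng.negative_binomial(size, p, n_events)  # severity of each event
    levels = np.clip(levels, 0, 7)                     # fold Levels > 7 into Level 7
    counts[i] = np.bincount(levels, minlength=8)[:8]

mean_per_level = counts.mean(axis=0)
print(mean_per_level.round(2))  # expected accidents per century, Level 0 through 7
```

The per-level means should drop as the level rises (apart from the Level-7 catch-all, which absorbs the whole upper tail), which is the qualitative result reported in the 03/21 edit.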
Combining probabilities of nuclear accidents
The underlying difficulty behind the question is that situations that have been anticipated have generally been planned for, with mitigation measures in place, which means the situation should not even turn into a serious accident. The serious accidents stem from unanticipated situations, which means you cannot assess probabilities for them - they are your Rumsfeldian unknown unknowns. The assumption of independence is clearly invalid - Fukushima Daiichi shows that. Nuclear plants can have common-mode failures, i.e. more than one reactor becoming unavailable at once due to a common cause. Although probabilities cannot be quantitatively calculated, we can make some qualitative assertions about common-mode failures. For example: if the plants are all built to the same design, then they are more likely to have common-mode failures (for example, the known problem with pressurizer cracks in EPRs/PWRs). If the plant sites share geographic commonalities, they are more likely to have common-mode failures: for example, if they all lie on the same earthquake fault line, or if they all rely on similar rivers within a single climatic zone for cooling (when a very dry summer can cause all such plants to be taken offline at once).
Combining probabilities of nuclear accidents
As commentators pointed out, this relies on the very strong assumption of independence. Let the probability that a plant blows up be $p$. Then the probability that a plant does not blow up is $1-p$, and the probability that none of $n$ plants blows up is $(1-p)^n$. The expected number of plants blown up per year is $np$. In case you're interested: this is the binomial distribution.
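A quick simulation check of this (in Python/NumPy rather than R; the plant count and probability are hypothetical numbers chosen just for illustration):

```python
import numpy as np

# Hypothetical numbers: 400 plants, each with probability p = 0.003 of a
# serious accident in a given year, assumed independent.
n, p = 400, 0.003
rng = np.random.default_rng(0)
failures = rng.binomial(n, p, size=100_000)  # accidents per simulated year

print(failures.mean())           # close to the expected value n*p = 1.2
print((failures == 0).mean())    # close to (1-p)**n, the chance no plant blows up
```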
Is it possible to use kernel PCA for feature selection?
I think the answer to your question is negative: it is not possible. Standard PCA can be used for feature selection, because each principal component is a linear combination of the original features, and so one can see which original features contribute most to the most prominent principal components; see e.g. here: Using principal component analysis (PCA) for feature selection. But in kernel PCA each principal component is a linear combination of features in the target space, and for e.g. the Gaussian kernel (which is often used) the target space is infinite-dimensional. So the concept of "loadings" does not really make sense for kPCA; in fact, kernel principal components are computed directly, bypassing computation of the principal axes (which for standard PCA are given in R by prcomp(...)$rotation) altogether, thanks to what is known as the kernel trick. See e.g. here: Is Kernel PCA with linear kernel equivalent to standard PCA? for more details. So no, it is not possible - at least there is no easy way.
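To illustrate the contrast, here is a minimal NumPy sketch (my own construction, in Python rather than R): linear PCA produces a loadings matrix with one weight per original feature, while RBF kernel PCA produces scores directly from the eigendecomposition of the centered kernel matrix, with no feature-space loadings to inspect:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))          # 50 observations, 4 original features
Xc = X - X.mean(axis=0)

# Linear PCA: principal axes (loadings) live in the original 4-feature space.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt.T                       # shape (4, 4): one weight per feature
print(loadings.shape)

# Kernel PCA with an RBF kernel: eigendecompose the centered kernel matrix.
gamma = 0.5
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H                        # double-centered kernel matrix

vals, vecs = np.linalg.eigh(Kc)       # ascending eigenvalues
top_vals = vals[::-1][:2]
top_vecs = vecs[:, ::-1][:, :2]
scores = top_vecs * np.sqrt(np.abs(top_vals))  # projections of the 50 points
print(scores.shape)                   # (50, 2): scores per observation, but
                                      # no (4, k) loadings matrix exists for
                                      # the RBF feature space
```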
Is it possible to use kernel PCA for feature selection?
The following example (taken from the kernlab reference manual) shows you how to access the various components of the kernel PCA:

library(kernlab)

data(iris)
test <- sample(1:50, 20)
kpc <- kpca(~., data=iris[-test, -5], kernel="rbfdot",
            kpar=list(sigma=0.2), features=2)

pcv(kpc)      # returns the principal component vectors
eig(kpc)      # returns the eigenvalues
rotated(kpc)  # returns the data projected in the (kernel) PCA space
kernelf(kpc)  # returns the kernel used when kpca was performed

Does this answer your question?
Using the stats package in R for kmeans clustering
I did not grasp question 1 completely, but I'll attempt an answer. The plot of Q1 shows how the within sum of squares (WSS) changes as the cluster number changes. In this kind of plot you must look for the kinks in the graph: a kink at 5 indicates that it is a good idea to use 5 clusters. WSS has a relationship with your variables in the following sense. The formula for WSS is $\sum_{j} \sum_{x_i \in C_j} \|x_i - \mu_j\|^2$, where $\mu_j$ is the mean point for cluster $j$, $x_i$ is the $i$-th observation, and we denote cluster $j$ as $C_j$. WSS is sometimes interpreted as "how similar are the points inside each cluster", where similarity refers to the variables.

The answer to question 2 is this. What you are actually seeing in clusplot() is the plot of your observations in the principal plane. The function calculates the principal component scores for each of your observations, plots those scores, and colors them by cluster. Principal component analysis (PCA) is a dimension reduction technique; it "summarizes" the information of all the variables into a couple of "new" variables called components, each of which explains a certain percentage of the total variability. In the example you read "These two components explain 73.95% of the total variability". The function clusplot() is used to judge the effectiveness of the clustering: if the clustering is successful you will see the clusters clearly separated in the principal plane, whereas if it is unsuccessful you will see them merged. For further reference on principal component analysis you may read the Wikipedia article; if you want a book, I suggest Modern Multivariate Statistical Techniques by Izenman, where you will find both PCA and k-means. Hope this helps :)
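As a concreteness check, the WSS formula above can be computed directly. This is a small sketch in Python/NumPy (not R) with made-up two-cluster data and a hypothetical label assignment:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two tight, well-separated synthetic clusters of 30 points each.
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal(3, 0.3, (30, 2))])
labels = np.array([0] * 30 + [1] * 30)   # hypothetical cluster assignment

# WSS = sum over clusters j of sum over points in C_j of ||x_i - mu_j||^2
wss = 0.0
for j in np.unique(labels):
    pts = X[labels == j]
    mu = pts.mean(axis=0)                # cluster mean mu_j
    wss += ((pts - mu) ** 2).sum()
print(wss)  # small, because each cluster sits tightly around its mean
```

Running this for k = 1, 2, 3, ... assignments and plotting WSS against k is exactly what produces the elbow plot discussed above.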
Using the stats package in R for kmeans clustering
I did not grasp question 1 completely, but I'll attempt an answer. The plot of Q1 shows how the within sum of squares (wss) changes as cluster number changes. In this kind of plots you must look for t
Using the stats package in R for kmeans clustering I did not grasp question 1 completely, but I'll attempt an answer. The plot of Q1 shows how the within sum of squares (wss) changes as cluster number changes. In this kind of plots you must look for the kinks in the graph, a kink at 5 indicates that it is a good idea to use 5 clusters. WSS has a relationship with your variables in the following sense, the formula for WSS is $\sum_{j} \sum_{x_i \in C_j} ||x_i - \mu_j||^2$ where $\mu_j$ is the mean point for cluster $j$ and $x_i$ is the $i$-th observation. We denote cluster j as $C_j$. WSS is sometimes interpreted as "how similar are the points inside of each cluster". This similarity refers to the variables. The answer to question 2 is this. What you are actually watching in the clusplot() is the plot of your observations in the principal plane. What this function is doing is calculating the principal component score for each of your observations, plotting those scores and coloring by cluster. Principal component analysis (PCA) is a dimension reduction technique; it "summarizes" the information of all variables into a couple of "new" variables called components. Each component is responsible of explaining certain percentage of the total variability. In the example you read "This two components explain 73.95% of the total variability". The function clusplot() is used to identify the effectiveness of clustering. In case you have a successful clustering you will see that clusters are clearly separated in the principal plane. On the other hand, you will see the clusters merged in the principal plane when clustering is unsuccessful. For further reference on principal component analysis you may read wiki. if you want a book I suggest Modern Multivariate Techniques by Izenmann, there you will find PCA and k-means. Hope this helps :)
29,108
How to re-sample an XTS time series in R?
library(xts)
?endpoints

For instance

tmp = zoo(rnorm(1000), as.POSIXct("2010-02-1") + (1:1000)*60)
tmp[endpoints(tmp, "minutes", 20)]

to subsample every 20 minutes. You might also want to check out to.minutes, to.daily, etc.
29,109
How to re-sample an XTS time series in R?
I'm still not sure what you're trying to do and I still think an example would help, but I thought I'd guess that you may be interested in align.time.

# Compare this:
tmp[endpoints(tmp, "minutes", 20)]
# with this:
align.time(tmp[endpoints(tmp, "minutes", 20)], n = 60*20)
29,110
How to re-sample an XTS time series in R?
If a is an xts object with entries to second resolution, this knocks off all the seconds:

index(a) = trunc(index(a), "mins")

You can use this to round down to "hours" resolution too, but 10 minutes is not supported. For that you have to do this:

x = as.POSIXlt(index(a))
x$sec[] = 0
x$min[] = (x$min[] %/% 10) * 10
index(a) = x

Or

a = align.time.down(a, 600)

where you've defined:

align.time.down = function(x, n) { index(x) = index(x) - n; align.time(x, n) }

(I've gone with that last choice in my own script.)
29,111
What to do with confounding variables?
Here are some suggestions relating to your bullet points above:

What about using the daily takings as an explanatory variable? What you need to do is form an equation where you predict gaming sales given a number of other factors. These factors will include things you are interested in, such as whether they used a prepaid card. However, you also need to include factors that you aren't interested in but have to adjust for, such as daily takings. Obviously, if the film is a blockbuster then gaming sales will increase.

Suppose you have N cinemas. Select N/2 cinemas and put them in group A; the rest go in group B. Now let group A be the control group and B the experimental group. If possible, alternate this set-up, i.e. make group A the experimental group for a few weeks.

If you can mix over groups (point above) then this isn't a problem. Even if you can't, you can include a variable representing the number of gaming units.

The statistical technique you will probably need is multiple linear regression (MLR). Essentially, you build an equation of the form:

Gaming sales = a0 + a1*Prepaid + a2*Takings + a3*<other things>

where a0, a1, a2, a3 are just numbers, Prepaid is either 0 or 1, and Takings are the daily takings. MLR will allow you to calculate the values of a0 to a3. So if a1 is large this indicates that Prepaid is important.
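As a sketch, the regression described above can be fit in R with lm(). All data and variable names below are hypothetical, invented purely for illustration.

```r
# Hypothetical data: gaming sales, whether a prepaid card was offered,
# and daily takings. All names and numbers are illustrative assumptions.
set.seed(1)
n <- 200
takings <- rnorm(n, mean = 5000, sd = 800)   # daily takings (confounder)
prepaid <- rbinom(n, size = 1, prob = 0.5)   # 0/1: prepaid card offered
gaming  <- 100 + 40 * prepaid + 0.05 * takings + rnorm(n, sd = 20)

# Gaming sales = a0 + a1*Prepaid + a2*Takings + noise
fit <- lm(gaming ~ prepaid + takings)
coef(fit)        # a1 (the prepaid coefficient) large => prepaid matters
summary(fit)     # standard errors and p-values for each coefficient
```

Here a2 adjusts for the daily takings, so a1 estimates the prepaid effect net of how busy the cinema was.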
29,112
What to do with confounding variables?
How about comparing before and after you introduce the cash option across the two groups? Say you assign half the cinemas to the cash option (treatment) and half continue with no-cash (control). Now you can compare how sales changed in the treatment group following the introduction of the cash option, and also how sales changed in the control group. If indeed the cash option is effective, then the change in the treatment group will be bigger than the change in the control group.

I recall reading an interesting statistical analysis done by Prof Ayala Cohen at the Technion's statistical lab for assessing the effect of removing advertising boards from a major highway in Israel on accidents in a similar fashion: to control for other factors that changed during this period, they compared the reduction in accidents before/after to a parallel highway where the advertising boards remained throughout the period.
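This before/after comparison across treatment and control groups is a difference-in-differences design, and in R it can be expressed as an interaction term in lm(). The data below are simulated, with all numbers chosen purely for illustration.

```r
# Simulated difference-in-differences: the treatment effect is the
# coefficient on the treated:after interaction. All values are made up.
set.seed(2)
n <- 400
treated <- rep(c(0, 1), each = n / 2)    # cinemas given the cash option
after   <- rep(c(0, 1), times = n / 2)   # before/after the introduction
effect  <- 15                            # true uplift from the cash option
sales <- 100 + 5 * treated + 10 * after +
         effect * treated * after + rnorm(n, sd = 8)

fit <- lm(sales ~ treated * after)
coef(fit)["treated:after"]   # estimate of the cash-option effect
```

The `after` term absorbs changes common to both groups (e.g. a blockbuster week), so the interaction isolates the change specific to the treated cinemas.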
29,113
What to do with confounding variables?
Aside from my practical statistical suggestion, I wanted to raise a slightly different issue: I realize that the cinema's goal is to maximize revenues, and of course the analysis (and strategy) can be geared towards that goal. However, I would like to suggest a broader, holistic view that companies as well as analysts should consider: the overall benefit. In this case, we can consider the value of the gaming addition to cinema goers. Are they happier or more satisfied with the overall experience? (this can be evaluated, e.g., via a quick questionnaire). Or, if the gaming is educational, for instance, then perhaps there is added benefit to those playing? I recall that in several cinemas in the U.S. there are word games on the screen before a movie starts. These can be perceived as fun and educational and could therefore be value added. In fact, if movie goers perceive the gaming service as value added, then they will likely choose this cinema over others and perhaps even visit it more frequently. What I am trying to say is that it is useful to define "success" in a broad manner and to think big. In the end, success will depend also on the wellness of the "customers" and the impact of "treatments" on society, culture, the environment, etc. Sorry if this is too philosophical, but I have had so many MBA students maximizing short-term financial gains and too few thinking of issues that are not monetary. Yet, data mining and statistics can be used for broader causes.
29,114
The probability of making a Type S error, and the average amount of magnification (type M error) as a function of power
Sketch of the t-test

Let $x_1, \dots, x_n \sim N(\mu_1,\sigma^2)$ and $y_1, \dots, y_n \sim N(\mu_2,\sigma^2)$ be independent samples. Let's define:

The raw effect $\theta = \mu_2-\mu_1$

The estimate of the raw effect $\hat{\theta} = \bar{Y} - \bar{X} \sim N \left(\theta,\frac{2}{n} \sigma^2 \right)$

The standard error of the raw effect $\text{se}(\hat\theta) = \sqrt\frac{2}{n} \hat\sigma = \sqrt\frac{2}{n} \sqrt{\frac{\sum_{i=1}^n (X_i-\bar{X})^2 + \sum_{i=1}^n (Y_i-\bar{Y})^2}{2n -2}} \sim \sigma\sqrt{\frac{2}{n}} \sqrt{\frac{1}{2n-2}} \chi_{2n-2}$, where $a\chi_{2n-2}$ means a scaled chi distribution (its square, $\chi^2_{2n-2}$, is a case of the gamma distribution).

The t-statistic is then defined as $$t = \frac{\hat\theta}{\text{se}(\hat\theta)} \sim t_{2n-2,\,(\theta/\sigma)\sqrt{n/2}}$$ This statistic follows a non-central t-distribution, where $2n-2$ is the number of degrees of freedom and $\frac{\theta}{\sigma}\sqrt{n/2}$ is the non-centrality parameter. A typical hypothesis test judges significance by whether or not the t-statistic is above some level. (The reason for all this hassle with the t-statistic is that it is a pivotal statistic that does not depend on the standard deviation $\sigma$ of the population.)

Geometric view of the t-test

A geometric view of the t-test can be made with a scatterplot with $\hat{\theta}$ on the horizontal axis and $\text{se}(\hat{\theta})$ on the vertical axis. We do this below in an example with simulations for the null hypothesis $\theta = 0$ and for the alternative hypotheses $d = \frac{\theta}{\sigma} = 0.5$ and $d=2$ (where the effect size $d$ is expressed relative to the population deviation; see also the question Power Analysis and the non central t distribution: what is the non-centrality parameter?). The simulations are made with samples of size $n=5$. (Click on the image to view a larger size.)

Figure 1: Simulations of results for an independent two-sample t-test with sample size 5. We simulated 3000 points under the null hypothesis of zero effect size (upper image) and under the alternative hypotheses of effect sizes $d = 0.5$ and $d=2$. The effect size is used for the horizontal axis and the standard error for the vertical axis. The t-statistic is proportional to the ratio of the two axes, $t = \frac{\hat{\theta}}{\text{se}(\hat{\theta})}$; the angle relates to the t-statistic, and points at a smaller angle have a larger t-statistic. Points with $|t|>2.3$ are considered significantly different, and in the case of the null hypothesis this occurs approximately 5% of the time.

Graphical illustration of the functions

Let's focus on the middle case of the three simulations, the t-tests when the true effect size is $d=0.5$. We can plot the distribution of the effect size for the cases when the observation is significant and for the cases when it is not:

Figure 2: Histogram of the 3000 cases in the middle plot from Figure 1, when the true effect size is $d=0.5$.

Based on these histograms one can compute power, the S-type error and the M-type error. The consequence of the hypothesis test is that mostly relatively large effects are accepted/reported (effects smaller than the true effect size can be reported if the estimated standard deviation is small). This means that reported values are biased and are often larger than the true effect sizes.

The rejection region is based on the distribution under the null hypothesis, and for a two-sided test it is such that a fraction $\alpha/2$ of the results on each side of the distribution is falsely rejected if the null hypothesis is true. In the first panel of Figure 1, the simulations when the null hypothesis is true, we reject the null hypothesis when $|t|>2.3$; this occurs in 5% of the cases.

The power is the probability of rejecting when the alternative hypothesis is correct. In Figure 1 we see that this occurs in respectively 10% and 80% of the cases for relative effect sizes $d=0.5$ and $d=2$.

The S-type error is the fraction of the rejected/significant cases with the wrong sign (this occurs mostly when the power is small). In Figure 2 this is the fraction of the red area compared to the total of the red and green areas: $\frac{0.4}{9.97} \approx 0.04$.

The M-type error, or exaggeration, is the mean of the absolute observed effect of the rejected/significant cases relative to the true effect. In Figure 2 this is the mean of the red and the green points divided by the true effect. In the example this is approximately 1.70.

Connection between power and S-type and M-type errors

For two given samples of size $n$ and significance level $\alpha$, the power is a function of the relative effect size (Cohen's d), as demonstrated in the question: Power Analysis and the non central t distribution: what is the non-centrality parameter?. The type-S and type-M errors are similarly functions of the effect size (explained further in the last section). We can plot them side by side as monotonic functions of the effect size.

Since the three values, power, type-S and type-M errors, all have relationships with the effect size, they all have relationships among each other as well. These relationships may not be easy to express with a simple mathematical expression, but one can use the graphs to find one from the other. For example, with a given power one estimates the effect size, then for this effect size one computes the S-type and M-type errors. (Or, for several different effect sizes, one computes both power and errors, then plots them against each other as horizontal and vertical axes in a scatterplot; see also Figure 4.)

Figure 3: Example of the relationships of the three values (power, type-S error and type-M error) as functions of the effect size, for sample size $n=5$.

Small difference to the retrodesign function

Note that in the article from Gelman and Carlin, they have a function retrodesign which they use to make the computations by using a shifted t-distribution. Here I used a non-central t-distribution, which represents a t-test more accurately. Also, for computing the M-error with a t-test, the computation should not simulate only the t-values but instead both the t-value and the effect size. The M-error is the mean absolute observed effect relative to the true effect, given that the observed effect is significant (we need to use the distribution in Figure 2, which is based on simulations like in Figure 1).

Generalization

This answer discusses the case of the two-sample t-test as in the reference of the question, but for other tests the computations might be different. There is no single fixed relationship between power and S-type and M-type errors. Figure 4 below demonstrates this for different values of $n$ and two types of statistical tests (although arguably one may consider the cases to be very close).

Figure 4: Example of different relationships between type-S and type-M errors and power, for different testing conditions.

Replacing simulation with exact computations

Note that the histograms from Figure 2 can be expressed in terms of the distribution functions of the normal distribution and the $\chi$ distribution. The histogram of all the points is normally distributed. The histogram of the significant points can be expressed as the density of the normal distribution multiplied by the cdf of a $\chi$ distribution. Potentially one might compute the M-type error based on this instead of using a simulation. For large $n$ the $\chi$ distribution approaches a singular distribution, and the distribution of the significant/non-significant cases becomes a truncated normal distribution.
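The exact computation hinted at here can be sketched with numerical integration (my own illustration, not code from the original answer). With the scaled effect estimate $x \sim N(d\sqrt{n/2},\,1)$ and $s = \hat\sigma/\sigma \sim \sqrt{\chi^2_{\nu}/\nu}$, a result is significant when $|x|/s > t_c$, which given $x$ has probability $P(\chi^2_\nu < \nu x^2/t_c^2) = \texttt{pchisq}(\nu x^2/t_c^2, \nu)$.

```r
# Exact M-type error for the two-sample t-test via numerical integration,
# replacing the simulation. psig(x) is the probability that a draw with
# scaled effect estimate x is significant, integrated against the normal
# density of x around ncp = d*sqrt(n/2).
type_m_exact <- function(d, n, alpha = 0.05) {
  nu   <- 2 * n - 2
  tc   <- qt(1 - alpha / 2, df = nu)
  ncp  <- d * sqrt(n / 2)
  psig <- function(x) pchisq(nu * x^2 / tc^2, df = nu)
  num <- integrate(function(x) abs(x) * dnorm(x, ncp) * psig(x),
                   -Inf, Inf)$value
  den <- integrate(function(x) dnorm(x, ncp) * psig(x),
                   -Inf, Inf)$value
  (num / den) / ncp      # E[|x| | significant] / ncp
}

type_m_exact(0.5, 5)     # compare with the simulated value of about 1.7
```

This mirrors the simulation's condition exactly (mean absolute significant effect divided by the true effect), just replacing Monte Carlo draws with integrate().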
Code for figures

With the code below one can make figures 3 and 4:

library(retrodesign) ### for shifted t-distribution significance tests

### compute power and error rates for two sample t-test
retropower = function(d, n, alpha = 0.05, n.sim = 10^4) {
  nu = 2*n-2
  ### boundary for alpha level t-test
  tc = qt(1-alpha/2, df = nu)
  ### power
  power = pt(-tc, df = nu, ncp = d/sqrt(2/n)) + 1 - pt(tc, df = nu, ncp = d/sqrt(2/n))
  ### s-error rate
  type_s = pt(-tc, df = nu, ncp = d/sqrt(2/n))/power
  ### simulate experiments
  x0 = rnorm(n.sim, 0)
  s = sqrt(rchisq(n.sim, nu)/nu)
  ### m-error
  type_m = sapply(d, FUN = function(di) {
    x = abs(x0 + di*sqrt(n/2))
    significant = x/s > tc
    return(mean(x[significant == 1]/sqrt(n/2))/di)
  })
  return(list(power = power, type_s = type_s, type_m = type_m))
}

### some settings
set.seed(1)
d = seq(0, 3, 0.05)

####
### creating plots for image 4
####
layout(matrix(1:2, 1, byrow = TRUE))
par(mgp = c(2,1,0), mar = c(4,4,3,1))

plot(2, 2, log = "xy", type = "l", xlim = c(0.05, 1), ylim = c(0.0001, 1),
     xlab = "power", ylab = "type-s error")
m = c(5, 20, 100)
i = 0
for (mi in m) {
  i = i + 1
  r = retropower(d/sqrt(mi/5), mi)
  q = retrodesign(A = as.list(d/sqrt(mi/5)), s = sqrt(2/mi), df = 2*mi - 2)
  lines(r$power, r$type_s, col = i)
  lines(q$power, q$type_s, col = i, lty = 2)
}

plot(2, 2, log = "xy", type = "l", xlim = c(0.05, 1), ylim = c(1, 20),
     xlab = "power", ylab = "type-m error")
m = c(5, 20, 100)
i = 0
for (mi in m) {
  i = i + 1
  r = retropower(d/sqrt(mi/5), mi)
  q = retrodesign(A = as.list(d/sqrt(mi/5)), s = sqrt(2/mi), df = 2*mi - 2)
  lines(r$power, r$type_m, col = i)
  lines(q$power, q$type_m, col = i, lty = 2)
}

title(main = "type S/M errors versus power for different type of tests",
      outer = TRUE, line = -1, cex = 1)
legend(0.08, 18, c("n=5", "n=20", "n=100"), col = c(1,2,3), lty = 1, cex = 0.7,
       title = "non-central\n t-distribution", box.lwd = 0)
legend(0.3, 18, c("n=5", "n=20", "n=100"), col = c(1,2,3), lty = 2, cex = 0.7,
       title = "shifted\n t-distribution", box.lwd = 0)

####
## creating plots for image 3
###
d = seq(0, 5, 0.025)
r = retropower(d, 5)

layout(matrix(1:3, 3))
par(mgp = c(2,1,0), mar = c(4,4,2,1))
plot(d, r$power, type = "l", xlab = "effect size in terms of sigma", ylab = "power",
     main = "power for two sample test with n = 5", ylim = c(0, 1))
plot(d, r$type_s, type = "l", xlab = "effect size in terms of sigma", ylab = "error rate",
     main = "S-type error for two sample test with n = 5", ylim = c(0, 0.5))
plot(d, r$type_m, type = "l", xlab = "effect size in terms of sigma", ylab = "magnification",
     main = "M-type error for two sample test with n = 5", ylim = c(0, 30))
The probability of making a Type S error, and the average amount of magnification (type M error) as
Sketch of the t-test Let $x_1, \dots, x_n \sim N(\mu_1,\sigma^2)$ and $y_1, \dots, y_n \sim N(\mu_2,\sigma^2)$ be independent samples. Let's define The raw effect $\theta = \mu_2-\mu_1$ The estimate
The probability of making a Type S error, and the average amount of magnification (type M error) as a function of power Sketch of the t-test Let $x_1, \dots, x_n \sim N(\mu_1,\sigma^2)$ and $y_1, \dots, y_n \sim N(\mu_2,\sigma^2)$ be independent samples. Let's define The raw effect $\theta = \mu_2-\mu_1$ The estimate of the raw effect $\hat{\theta} = \bar{Y} - \bar{X} \sim N \left(\theta,\frac{2}{n} \sigma^2 \right)$ The standard error of the raw effect $\text{se}(\hat\theta) = \sqrt\frac{2}{n} \hat\sigma = \sqrt\frac{2}{n} \sqrt{\frac{\sum_{i=1}^n (X_i-\bar{X})^2 + \sum_{i=1}^n (Y_i-\bar{Y})^2}{2n -2}} \sim \sigma\sqrt{\frac{2}{n}} \sqrt{\frac{1}{2n-2}} \chi_{2n-2}$ where $a\chi_{2n-2}$ means a scaled chi distribution (also known as a case of the gamma distribution). And the t-statistic is defined as $$t = \frac{\hat\theta}{\text{se}(\hat\theta)} \sim t_{2n-2,\theta\sqrt{n/2}}$$ this statistic follows a non-central t-distribution where $2n-2$ are the degrees of freedom and $\theta \sqrt{n/2}$ is the non-centrality parameter. A typical hypothesis test will regard the significance based on whether or not the t-statistic is above some level. (and the reason for all this hassle with the t-statistic is that it is a pivotal statistic that does not depend on the standard deviation $\sigma$ of the population.) Geometric view of the t-test A geometric view of the t-test can be made with a scatterplot with $\hat{\theta}$ on the horizontal axis and $\text{se}(\hat{\theta})$ on the vertical axis. We do this below in an example with simulations for the null hypothesis $\theta = 0$ and for the alternative hypothesis $d = \frac{\theta}{\sigma} = 0.5$ and $d=2$ (where the effect size $d$ is expressed relative to the population deviation, see also the question Power Analysis and the non central t distribution: what is the non-centrality parameter?). The simulations are made with samples of size $n=5$. 
(click on the image to view a larger size) Figure 1: Simulations of results for an independent two sample t-test with sample sizes 5. We simulated 3000 points under the null hypothesis of zero effect size (upper image) and under the alternative hypotheses of an effect size equal to $d = 0.5$ and $d=2$. The effect size is used for the horizontal axis and the standard error for the vertical axis. The t-statistic is proportional to the ratio of the two axes $t = \frac{\hat{\theta}}{\text{se}(\hat{\theta})}$. The angle relates to the t-statistic and points at a smaller angle will have a larger t-statistic. Points with $|t|>2.3$ are considered significantly different, and in the case of the null hypothesis this occurs approximately 5% of the time. Graphical illustration of the functions Let's focus on the middle case of the three simulations, the t-tests when the true effect size is $d=0.5$. We can plot the distribution of the effect size for the cases when the observation is significant and for the cases when the observation is not significant: Figure 2: Histogram of the 3000 cases in the middle plot from Figure 1, when the true effect size is $d=0.5$. Based on these histograms one can compute power, s-type error and m-type error. The consequence of the hypothesis test is that mostly relatively large effects are accepted/reported (smaller effects than the true effect size can be reported, if the estimated standard deviation is small). This makes that reported values have a bias and are often larger than the true effect sizes. The rejection region is based on the distribution of the null hypothesis and for a two sided test it is such that $\alpha/2$ percent of the results on both sides of the distribution are falsely rejected if the null hypothesis is true. In the first panel of Figure 1, the simulations when the null hypothesis is true, we reject the null hypothesis when $|t|>2.3$, this occurs in 5% of the cases. 
The power is the probability to reject when the alternative hypothesis is correct. In the Figure 1 we see that this occurs in respectively 10% and 80% of the cases for relative effect sizes $d=0.5$ and $d=2$. The S-type error is the fraction of the rejected/significant cases with the wrong sign (this occurs mostly when the power is small). In the Figure 2 this is the fraction of the red area compared to the total of the red and green area. $\frac{0.4}{9.97} \approx 0.04$ The M-type error or exaggeration is the mean of the absolute observed effect of the rejected/significant cases relative to the true effect. In Figure 2 this is the mean of the red and the green points divided by the true effect. In the example this is approximately 1.70. Connection between power and S-type and M-type errors For two given samples with size $n$ and significance level $\alpha$ the power is a function of the relative effect size (Cohen's d) as demonstrated in the question: Power Analysis and the non central t distribution: what is the non-centrality parameter?. The type-S and type-M errors are similarly functions of the effect size (explained further in the last section). We can plot them side by side as monotonous functions of the effect size. Since the three values, power, type-S and type-M errors, all have relationships with the effect size, this makes that they all have relationships among each other as well. These relationships may not be easy to express with a simple mathematical expression, but one can use the graphs to find one from the other. For example, with a given power one estimates the effect size, then for this effect size one computes the s-type and M-type errors. 
(or for several different effect sizes one computes both power an error, then plot those two versus each other as horizontal and vertical axis in a scatterplot, see also Figure 4) Figure 3: example of relationships of three values (power, type-S error and type-M error) as function of the effect size for sample size $n=5$. Small difference to the retrodesign function Note, in the article from Gelman and Carlin, they have a function retrodesign which they use to make the computations by using a shifted t-distribution. Here I used a non-central t-distribution which represents a t-test more accurately. Also for computing the M-error with a t-test, the computation of the M-error should not simulate only the t-values but instead both the t-value and the effect size. The M-error is the mean absolute observed effect relative to the true effect, given that the observed effect is significant (we need to use the distribution in Figure 2 that is based on simulations like in Figure 1). Generalization This answer discusses the case of the two-sample t-test as in the reference of the question, but for other tests the computations might be different. There is no single fixed relationship between power and S-type and M-type errors. The image 4 below demonstrates this for different values of $n$ and two types of statistical tests. (although arguably one may consider the cases to be very close) Figure 4: Example of different relationships between type S and type M errors with power, or different testing conditions. Replacing simulation with exact computations Note that the histograms from Figure 2 can be expressed in terms of the distribution functions of the normal distribution and $\chi$ distribution. The histogram of the total of the points is normal distributed. The histogram of the significant points can be expressed as the density of the normal distribution multiplied with the cdf of a chi distribution. 
Potentially one might compute the m-type error based on this instead of using a simulation. In the case of large $n$ the chi distribution approaches a singular distribution and the distribution of the significant/non-significant cases will become a truncated normal distribution. Code for figures With the code below one can make figures 3 and 4

library(retrodesign)   ### for shifted t-distribution significance tests

### compute power and error rates for two sample t-test
retropower = function(d, n, alpha = 0.05, n.sim = 10^4) {
  nu = 2*n - 2
  ### boundary for alpha level t-test
  tc = qt(1 - alpha/2, df = nu)
  ### power
  power = pt(-tc, df = nu, ncp = d/sqrt(2/n)) + 1 - pt(tc, df = nu, ncp = d/sqrt(2/n))
  ### s-error rate
  type_s = pt(-tc, df = nu, ncp = d/sqrt(2/n))/power
  ### simulate experiments
  x0 = rnorm(n.sim, 0)
  s = sqrt(rchisq(n.sim, nu)/nu)
  ### m-error
  type_m = sapply(d, FUN = function(di) {
    x = abs(x0 + di*sqrt(n/2))
    significant = x/s > tc
    return(mean(x[significant == 1]/sqrt(n/2))/di)
  })
  return(list(power = power, type_s = type_s, type_m = type_m))
}

### some settings
set.seed(1)
d = seq(0, 3, 0.05)

####
### creating plots for image 4
####
layout(matrix(1:2, 1, byrow = TRUE))
par(mgp = c(2,1,0), mar = c(4,4,3,1))

plot(2, 2, log = "xy", type = "l", xlim = c(0.05, 1), ylim = c(0.0001, 1),
     xlab = "power", ylab = "type-s error")
m = c(5, 20, 100)
i = 0
for (mi in m) {
  i = i + 1
  r = retropower(d/sqrt(mi/5), mi)
  q = retrodesign(A = as.list(d/sqrt(mi/5)), s = sqrt(2/mi), df = 2*mi - 2)
  lines(r$power, r$type_s, col = i)
  lines(q$power, q$type_s, col = i, lty = 2)
}

plot(2, 2, log = "xy", type = "l", xlim = c(0.05, 1), ylim = c(1, 20),
     xlab = "power", ylab = "type-m error")
m = c(5, 20, 100)
i = 0
for (mi in m) {
  i = i + 1
  r = retropower(d/sqrt(mi/5), mi)
  q = retrodesign(A = as.list(d/sqrt(mi/5)), s = sqrt(2/mi), df = 2*mi - 2)
  lines(r$power, r$type_m, col = i)
  lines(q$power, q$type_m, col = i, lty = 2)
}

title(main = "type S/M errors versus power for different type of tests",
      outer = TRUE, line = -1, cex = 1)

legend(0.08, 18, c("n=5","n=20","n=100"), col = c(1,2,3), lty = 1, cex = 0.7,
       title = "non-central\n t-distribution", box.lwd = 0)
legend(0.3, 18, c("n=5","n=20","n=100"), col = c(1,2,3), lty = 2, cex = 0.7,
       title = "shifted\n t-distribution", box.lwd = 0)

####
### creating plots for image 3
####
d = seq(0, 5, 0.025)
r = retropower(d, 5)

layout(matrix(1:3, 3))
par(mgp = c(2,1,0), mar = c(4,4,2,1))
plot(d, r$power, type = "l", xlab = "effect size in terms of sigma",
     ylab = "power", main = "power for two sample test with n = 5",
     ylim = c(0, 1))
plot(d, r$type_s, type = "l", xlab = "effect size in terms of sigma",
     ylab = "error rate", main = "S-type error for two sample test with n = 5",
     ylim = c(0, 0.5))
plot(d, r$type_m, type = "l", xlab = "effect size in terms of sigma",
     ylab = "magnification", main = "M-type error for two sample test with n = 5",
     ylim = c(0, 30))
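As a cross-check of the exact power and type-S computation in the R function above (this sketch is my addition, not part of the original answer), the same quantities can be computed in Python with scipy's non-central t-distribution:

```python
from scipy import stats

def exact_power_type_s(d, n, alpha=0.05):
    """Exact power and type-S error of a two-sample t-test.

    d is the true effect size in units of sigma, n the per-group sample size.
    """
    nu = 2 * n - 2                              # degrees of freedom
    tc = stats.t.ppf(1 - alpha / 2, df=nu)      # two-sided critical value
    ncp = d / (2 / n) ** 0.5                    # non-centrality parameter
    lower = stats.nct.cdf(-tc, df=nu, nc=ncp)   # P(T < -tc): wrong-sign rejections
    upper = stats.nct.sf(tc, df=nu, nc=ncp)     # P(T >  tc): right-sign rejections
    power = lower + upper
    type_s = lower / power                      # wrong sign, given significance
    return power, type_s

power, type_s = exact_power_type_s(d=0.5, n=5)
```

For small $d$ and $n$ the power is low and the type-S rate non-negligible, matching the left end of Figure 3.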
29,115
The probability of making a Type S error, and the average amount of magnification (type M error) as a function of power
To avoid notational difficulties, I will use notation similar to Gelman and Carlin, with the effect size represented as $\theta$ and upper-case $D$, $D^\text{rep}$, etc., used to represent the data as a random variable. We will consider a test with null hypothesis $H_0: \theta = 0$ and a test statistic $D$ which is an estimator for the true effect size. We assume that the test is constructed with an evidentiary ordering that is more in favour of the alternative hypothesis for larger (absolute) magnitude of $D$, so the null hypothesis is rejected if the absolute magnitude of $D$ falls too far from zero. Given a significance level $\alpha$ we let $d_\text{crit}$ denote the (positive) critical point of the test, so that the null is rejected if $|D| > d_\text{crit}$. (Note that all of our analysis will implicitly depend on $\alpha$.) In Gelman and Carlin there are some further simplifying assumptions made about the test that lead to a particular form for the sampling density of the test statistic. Here we will proceed in generality by giving formulae that apply for any distribution of the test statistic. To facilitate analysis of the quantities of interest, we define the intermediate quantities: $$H_-(\theta) \equiv \mathbb{P}_\theta(D^\text{rep} < -d_\text{crit} ) \quad \quad \quad \quad \quad H_+(\theta) \equiv \mathbb{P}_\theta(D^\text{rep} > d_\text{crit} ).$$ The quantities of interest can then be written as: $$\begin{align} \text{Power} (\theta) &\equiv \mathbb{P}_\theta(p(D^\text{rep}) < \alpha) \\[18pt] &= \mathbb{P}_\theta(|D^\text{rep}| > d_\text{crit}) \\[18pt] &= \mathbb{P}_\theta(D^\text{rep} < -d_\text{crit} ) + \mathbb{P}_\theta(D^\text{rep} > d_\text{crit} ) \\[18pt] &= H_-(\theta) + H_+(\theta). 
\\[24pt] \text{Type-S Error Rate} (\theta) &\equiv \mathbb{P}_\theta(D^\text{rep} < 0 \mid p(D^\text{rep}) < \alpha) \\[18pt] &= \mathbb{P}_\theta(D^\text{rep} < 0 \mid |D^\text{rep}| > d_\text{crit} ) \\[10pt] &= \frac{\mathbb{P}_\theta(D^\text{rep} < 0, |D^\text{rep}| > d_\text{crit} )}{\mathbb{P}_\theta(D^\text{rep} < 0, |D^\text{rep}| > d_\text{crit} ) + \mathbb{P}_\theta(D^\text{rep} \geqslant 0, |D^\text{rep}| > d_\text{crit} )} \\[6pt] &= \frac{\mathbb{P}_\theta(D^\text{rep} < -d_\text{crit} )}{\mathbb{P}_\theta(D^\text{rep} < -d_\text{crit} ) + \mathbb{P}_\theta(D^\text{rep} > d_\text{crit} )} \\[6pt] &= \frac{H_{-\text{sgn}(\theta)}(\theta)}{H_-(\theta) + H_+(\theta)}. \\[16pt] \text{Exaggeration Ratio} (\theta) &\equiv \frac{1}{\theta} \cdot \mathbb{E}_\theta(|D^\text{rep}| \mid p(D^\text{rep}) < \alpha) \\[10pt] &= \frac{1}{\theta} \cdot \mathbb{E}_\theta(|D^\text{rep}| \mid |D^\text{rep}| > d_\text{crit}) \\[10pt] &= \frac{\mathbb{E}_\theta(D^\text{rep} \mid D^\text{rep} > d_\text{crit}) \cdot H_+(\theta) - \mathbb{E}_\theta(D^\text{rep} \mid D^\text{rep} < -d_\text{crit}) \cdot H_-(\theta)}{\theta \cdot \big( H_-(\theta) + H_+(\theta) \big)}. \\[10pt] \end{align}$$ (Note that even though we are proceeding here for the case where $\theta>0$, to make things easier I have given the more general formula for the Type-S error in the last step.)
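As an illustrative sketch (my addition, with made-up example values), these formulas can be evaluated for the normal-sampling case Gelman and Carlin study, $D^\text{rep} \sim N(\theta, s^2)$, where the conditional expectations are truncated-normal means:

```python
from scipy import stats

def design_analysis(theta, s, alpha=0.05):
    """Power, type-S rate and exaggeration ratio for D^rep ~ N(theta, s^2), theta != 0."""
    d_crit = s * stats.norm.ppf(1 - alpha / 2)              # critical point
    H_minus = stats.norm.cdf(-d_crit, loc=theta, scale=s)   # P(D < -d_crit)
    H_plus = stats.norm.sf(d_crit, loc=theta, scale=s)      # P(D >  d_crit)
    power = H_minus + H_plus
    type_s = (H_minus if theta > 0 else H_plus) / power     # wrong-sign share
    # E[D | D > d_crit] and E[D | D < -d_crit] via truncated-normal means
    a = (d_crit - theta) / s
    b = (-d_crit - theta) / s
    e_upper = theta + s * stats.norm.pdf(a) / stats.norm.sf(a)
    e_lower = theta - s * stats.norm.pdf(b) / stats.norm.cdf(b)
    # exaggeration ratio = E[|D| | significant] / theta
    exaggeration = (e_upper * H_plus - e_lower * H_minus) / (power * theta)
    return power, type_s, exaggeration
```

For example, $\theta = 0.5$, $s = 1$ gives low power together with a large exaggeration ratio, the pattern Gelman and Carlin warn about.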
29,116
Does it make sense to find confidence intervals for neural networks?
For simple regression models, if you have the joint distribution of the parameters you get both confidence intervals and, in a sort of derived fashion, prediction intervals. You typically have the joint distribution for regression models either way: for Bayesian models fit using MCMC samplers you have pseudo-random samples from that distribution, and for maximum likelihood estimation you typically have a multivariate normal distribution approximation. Let's start with the latter case and your example: your confidence interval for a parameter is usually something like $(\hat{\beta}_0 - 1.96 \times \text{SE}(\hat{\beta}_0),\ \hat{\beta}_0 + 1.96 \times \text{SE}(\hat{\beta}_0))$, and you have the same for $\hat{\beta}_1$. Once you want a prediction interval, the point prediction for a new observation with covariates $x_*$ is $\hat{\beta}_0 + \hat{\beta}_1 x_*$, but the standard error is $\sqrt{ \text{SE}(\hat{\beta}_0)^2 + x_*^2 \text{SE}(\hat{\beta}_1)^2 + 2 x_* \text{Cov}(\hat{\beta}_0, \hat{\beta}_1) + \sigma^2}$ (if we know the standard deviation $\sigma$ of the residual error term - it gets a little more complicated if we estimate that, too). So, one difference is that prediction intervals take the variation in outcomes (from the residual error term) into account, too. Both are useful, because we are interested in the uncertainty of predictions, but possibly also in interpreting individual coefficients and seeing how much they might vary through sampling variation. Similarly, for a Bayesian model you get credible intervals from the $K$ MCMC samples by considering the distribution of samples $\hat{\beta}^{(k)}_0$ for $k=1,\ldots,K$. You get a credible interval for the linear prediction term via the distribution of $\hat{\beta}^{(k)}_0 + \hat{\beta}^{(k)}_1 x_*$ and the prediction interval via sampling for each $k$ from a $N(\hat{\beta}^{(k)}_0 + \hat{\beta}^{(k)}_1 x_*, \hat{\sigma}^{(k)})$ distribution (repeatedly or just once, as you wish). 
You might say that this is a lot easier and more straightforward than the frequentist case; in particular, taking the uncertainty around the estimated residual standard deviation into account is trivial. For neural networks, gradient boosted trees etc., I don't think a CI for an individual model parameter / weight / tree split is really useful, even if you can calculate it. We typically have a lot of trouble interpreting individual parameters, and rather tend to look at the influence of features of an input on the output. I guess you could get confidence intervals for something like SHAP values (probably just by bootstrapping), but I've indeed never seen that. What people are much more (only?) interested in is prediction intervals. Ideas for getting them include: in theory (in practice only for super simple cases), you can do the same things as above, but the complexity usually makes this challenging ensemble based methods (as you mentioned - one variant of that is leaving dropout on at inference time in neural networks trained with dropout) bootstrapping (obviously rather time consuming) quantile regression (e.g. your neural network has three outputs: a point prediction and, say, the 10th and 90th percentile of the distribution for points with such covariates, fit using, say, quantile regression loss of some form / pinball loss - see e.g. this discussion on a Kaggle competition) There's probably quite a few more approaches.
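A small numeric sketch of the frequentist formulas above (all values are assumed example numbers, not from any fitted model): the standard error of the fitted line versus that of a new observation.

```python
import numpy as np

# assumed example values for SE(beta0_hat), SE(beta1_hat), Cov(beta0_hat, beta1_hat)
se_b0, se_b1 = 0.4, 0.1
cov_b0_b1 = -0.02
sigma = 1.5        # residual SD, assumed known here
x_star = 2.0       # covariate value of the new observation

# SE of the mean prediction (confidence interval for the regression line)
se_mean = np.sqrt(se_b0**2 + x_star**2 * se_b1**2 + 2 * x_star * cov_b0_b1)

# SE for a new observation (prediction interval) adds the residual variance
se_pred = np.sqrt(se_mean**2 + sigma**2)
```

The prediction-interval standard error is always at least as large as the confidence-interval one, since it adds $\sigma^2$ under the square root.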
29,117
What is the best metric for machine learning model to predict customer probability to buy
Certainly, you shouldn’t use the common classification metrics like accuracy. They don’t say much about whether the probabilities are correct. If you want to estimate the probabilities precisely, you need proper scoring rules (see other questions tagged as scoring-rules), like Brier score (squared error) or log loss (aka cross-entropy loss). There was recently an interesting paper by Hui and Belkin (2020) showing that using squared error as a loss function for a classifier may give as good if not better results as compared to the “default” log loss. On the other hand, you are saying that you want to use the probabilities to rank the customers; that’s a different problem. For ranking, you don’t care that much about the probabilities being correct, as long as they’re ordered correctly. There are specialised metrics like mean percentage ranking, mean reciprocal rank, top-$k$ accuracy, precision@$k$, etc. Assuming it is a ranking problem, you probably should consider using specialized ranking algorithms as well. The choice depends on how exactly you want to use the results and details about your data.
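The two proper scoring rules mentioned above can be computed by hand; the labels and predicted probabilities below are made up for illustration.

```python
import numpy as np

y = np.array([1, 0, 1, 1, 0])             # whether each customer bought
p = np.array([0.9, 0.2, 0.6, 0.8, 0.1])   # model's predicted purchase probabilities

brier = np.mean((p - y) ** 2)             # Brier score: squared error on probabilities

eps = 1e-15                               # guard against log(0)
log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
```

Lower is better for both; if scikit-learn is available, `sklearn.metrics.brier_score_loss` and `sklearn.metrics.log_loss` give the same quantities.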
29,118
How can someone improve a synthetic control?
In addition to the other excellent answer (and yes, more data can really help; this could include different levels of aggregation, e.g. instead of state-level data, can you get county-level data?), there's also the option of shortening or weighting the time horizon for which you are matching the synthetic control. E.g. in the book example, they could have gone further back (assuming the data are available), but they clearly made a choice that e.g. matching the trajectory for data from before the second world war is irrelevant to the question at hand. You could also decide that on a sliding scale of importance matching the data for 1987 is twice as important as matching the data for 1977 (which you can either achieve by weighting the loss functions or just scaling the values you are matching - e.g. scaling standardized values in 1977 by a factor of $\sqrt{0.5} \approx 0.707$). This would be arguing that a similar recent trajectory is just more important than what happened further in the past. However, there's a situation where interpolated synthetic controls just cannot match the treated unit. That's when it's the most extreme of the available units. E.g. in the book example, if California had had by far the highest (or the lowest) per capita cigarette sales prior to the intervention, there would simply be no weighted average of the other states that could match its numbers. While there might be ways around it (e.g. if you go to the county level and there are some counties somewhere that can match the California counties), this would also have been a big red flag warning you that perhaps no other states really look like California and trying to compare what happens in California to what happens in these states is really problematic. I.e. it might be a warning that there might be no good answer. To quote John Tukey: "The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data."
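To make the recency-weighting idea concrete, here is a toy sketch (all numbers invented): each pre-treatment year's matching equation is scaled by the square root of its weight before a least-squares fit, so a year with half the weight contributes half as much to the squared-error loss. (A real synthetic control would also constrain the donor weights to be non-negative and sum to one; that constraint is omitted here for brevity.)

```python
import numpy as np

years = np.array([1977, 1982, 1987])
Y_donors = np.array([[10., 20.],      # donor-state outcomes, one row per year
                     [12., 22.],
                     [14., 24.]])
Y_treated = np.array([15., 17., 19.]) # treated-state outcomes
w_t = np.array([0.5, 0.75, 1.0])      # recency weights: 1987 counts double 1977

# scale each year's equation by sqrt(w_t) -> weighted least squares
scale = np.sqrt(w_t)
weights, *_ = np.linalg.lstsq(Y_donors * scale[:, None],
                              Y_treated * scale, rcond=None)
synthetic = Y_donors @ weights        # synthetic-control trajectory
```

In this invented example the treated unit happens to be an exact 50/50 mix of the two donors, so the fit is perfect regardless of the weights; with a real, imperfect fit the weights shift the residuals toward the early years.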
29,119
How can someone improve a synthetic control?
You can sometimes improve fit by: Getting more pre-treatment data when possible, though that runs the risk of going so far back that the structural relationship is too different from the period in the study, like when CA was Mexico. Also, does not always work. Adding variables that are not just the lagged outcome in calculating the weights, like beer consumption and income per capita. Removing units from the potential donor pool that are dissimilar to the treated unit: Only use untreated units that do not adopt interventions similar to the one under investigation during the period of the study. Do not suffer large idiosyncratic/local shocks to the outcome of interest during the study period. Have characteristics similar to the characteristics of the affected unit. Fit can sometimes be improved by using a transformation of the dependent variable (e.g., $\Delta Y_{it}$ or $\frac{\Delta Y_{it}}{Y_{it}}$), so you are matching the trend or growth rather than the levels. Normalizing by population can help if your treated unit is large relative to the untreated, as CA would be had they not used per-capita figures. If the bias in fit is constant over time, you can simply subtract it from the effect. However, I have never seen that happen in the wild. If the poor fit is mostly in the early pre-treatment period, that period can be excluded if motivated well. If the poor fit is in the late pre-treatment period, that could indicate evidence of treatment anticipation. For example, if consumers knew the tax was coming soon, they could stock up and put the tobacco in a freezer. Then moving back the treatment start date to the tax increase announcement can help. If the fit is still bad, abandon the project and move on with your life or try something else.
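The outcome transformations mentioned above can be sketched with a made-up series; here the relative change is taken with respect to the previous period's level, one common reading of the growth transformation.

```python
import numpy as np

Y = np.array([100.0, 110.0, 121.0])   # outcome of one unit over three periods
diff = np.diff(Y)                     # Delta Y_t = Y_t - Y_{t-1}: matches the trend
growth = np.diff(Y) / Y[:-1]          # relative change: matches the growth rate
```

A series growing at a constant 10% has an increasing trend in levels and differences but a flat growth rate, which is exactly why the transformation can make a hard-to-match unit matchable.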
29,120
Can we derive cross entropy formula as maximum likelihood estimation for SOFT LABELS?
Soft labels define a 'true' target distribution over class labels for each data point. As I described previously, a probabilistic classifier can be fit by minimizing the cross entropy between the target distribution and the predicted distribution. In this context, minimizing the cross entropy is equivalent to minimizing the KL divergence. So, what we're doing is finding a good approximation to the target distribution (as measured by KL divergence). However, as described below, the problem can equivalently be cast as a weighted maximum likelihood problem, where the soft labels determine the weights. I'll show this for binary classification, but the same reasoning also applies to multiclass problems. Probabilistic binary classification with soft labels Let $X = \{x_1, \dots, x_n\}$ be a set of data points with binary class labels $\mathbf{y} \in \{0, 1\}^n$. Assume the class labels are conditionally independent, given $X$. The class labels are unknown, but we have soft labels $\mathbf{\ell} \in [0,1]^n$, where $\ell_i$ gives the probability that $y_i=1$. The soft labels define a Bernoulli target distribution over class labels for each data point: $$p(y \mid \ell_i) = \left\{ \begin{array}{cl} \ell_i & y = 1 \\ 1 - \ell_i & y = 0 \\ \end{array} \right.$$ The goal is to learn a conditional distribution $q(y \mid x, \theta)$ (a.k.a. probabilistic classifier, parameterized by $\theta$), such that the predicted class probabilities approximate those given by the soft labels. 
We do this by minimizing the cross entropy between the target and predicted distributions over class labels, summed over data points: $$\min_\theta \ \sum_{i=1}^n H \Big( p(y \mid \ell_i), q(y \mid x_i, \theta) \Big) \tag{1}$$ Writing out the expression for the cross entropy, the problem is: $$\min_\theta \ -\sum_{i=1}^n \ell_i \log q(y=1 \mid x_i, \theta) - \sum_{i=1}^n (1-\ell_i) \log q(y=0 \mid x_i, \theta) \tag{3}$$ Equivalence to weighted maximum likelihood Suppose we define a new dataset $(\tilde{X}, \tilde{\mathbf{y}})$ by duplicating each data point. We assign hard class label $1$ to the first duplicate, and $0$ to the second duplicate. Furthermore, we assign a weight to each new data point. The first duplicates are weighted by the soft labels, and the second duplicates are weighted by one minus the soft labels. That is: $$\begin{array}{ccl} \tilde{X} & = & \{x_1, \dots, x_n, x_1, \dots, x_n\} \\ \tilde{y} & = & [1, \dots, 1, 0, \dots, 0]^T \\ \tilde{w} & = & [\ell_1, \dots, \ell_n, 1-\ell_1, \dots, 1-\ell_n]^T \end{array} \tag{4}$$ Intuitively, you can think of the weights as a continuous analog of 'how many times' we've seen each case. We've constructed the new dataset in a way that translates soft labels into 'replications'. For example, if a point has soft label $0.75$, this is like seeing the same point three times with hard label $1$ and once with hard label $0$ (giving weights .75 and .25, respectively). As above, we want to learn a conditional distribution $q(y \mid x, \theta)$, but this time using the new dataset with hard labels and weights. 
We do this by maximizing the weighted likelihood: $$L_{\tilde{w}}(\theta; \tilde{X}, \tilde{\mathbf{y}}) = \prod_{i=1}^{2 n} q(\tilde{y}_i \mid \tilde{x}_i, \theta)^{\tilde{w}_i} \tag{5}$$ This is equivalent to minimizing the weighted negative log likelihood: $$-\log L_{\tilde{w}}(\theta; \tilde{X}, \tilde{\mathbf{y}}) = -\sum_{i=1}^{2 n} \tilde{w}_i \log q(\tilde{y}_i \mid \tilde{x}_i, \theta) \tag{6}$$ Substitute in our expressions for $\tilde{X}, \tilde{\mathbf{y}}, \tilde{w}$: $$\begin{matrix} -\log L_{\tilde{w}}(\theta; \tilde{X}, \tilde{\mathbf{y}}) = \\ -\sum_{i=1}^n \ell_i \log q(y=1 \mid x_i, \theta) - \sum_{i=1}^n (1-\ell_i) \log q(y=0 \mid x_i, \theta) \end{matrix}\tag{7}$$ The weighted negative log likelihood in $(7)$ is the same as the cross entropy loss in $(3)$. So, the weighted maximum likelihood problem here is equivalent to the cross entropy minimization problem above.
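The equivalence derived above can be checked numerically (with toy numbers of my choosing): the soft-label cross entropy in $(3)$ equals the weighted negative log likelihood in $(7)$ on the duplicated hard-label dataset of $(4)$.

```python
import numpy as np

ell = np.array([0.75, 0.2, 0.5])   # soft labels: P(y_i = 1)
q1 = np.array([0.7, 0.3, 0.6])     # predicted probabilities q(y=1 | x_i)

# (3): cross entropy against the soft-label targets
ce = -np.sum(ell * np.log(q1) + (1 - ell) * np.log(1 - q1))

# (7): duplicate each point with hard labels 1 and 0, weighted by ell and 1-ell
q_dup = np.concatenate([q1, 1 - q1])   # q(y_tilde_i | x_tilde_i) for each duplicate
w_dup = np.concatenate([ell, 1 - ell]) # weights w_tilde
wnll = -np.sum(w_dup * np.log(q_dup))  # weighted negative log likelihood
```

The two quantities agree to floating-point precision, as the algebra says they must.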
Can we derive cross entropy formula as maximum likelihood estimation for SOFT LABELS?
If we consider a continuous relaxation of Bernoulli that allows the true probability to be between 0 and 1, a recent paper argues [1] that, no, cross-entropy is not adequate for $y \in [0,1]$, because it is not a Bernoulli distributed variable. While their work is concerned with Variational Autoencoders, the argument can be extended to other uses of the Bernoulli likelihood. The continuous $y$ can be regarded as a soft-label. A Beta distribution could be used instead, but they also propose a new distribution that augments the Bernoulli, which entails a simple correction to cross-entropy. The Continuous Bernoulli distribution is given by, with $\lambda \in (0,1)$, $x \in [0,1]$: $$p_{\mathcal{CB}}(x|\lambda) = C(\lambda)\lambda^x(1-\lambda)^{1-x}$$ Contrast it with the original Bernoulli, with $p \in (0,1)$, $ k \in \{0,1\} $: $$p_{\mathcal{B}}(k|p) = p^k(1-p)^{1-k}$$ The Continuous Bernoulli is proportional to the Bernoulli, but with continuous $k$, and the correction term is introduced to make it a valid distribution. The new cross-entropy then is: $$\mathcal L(\hat y, y) = y\log(\hat y) + (1 - y) \log(1-\hat y) + \color{red}{\log C(\hat y)}$$ This last term, the normalizing correction, is given by: $$C(x) = \begin{cases} \begin{align} &\frac{2\tanh^{-1}(1-2x)}{1-2x} \quad &\text{if} \quad x \neq 0.5\\ &2 \quad &\text{if} \quad x = 0.5 \end{align} \end{cases}$$ [1] Loaiza-Ganem, G., & Cunningham, J. P. (2019). The continuous Bernoulli: fixing a pervasive error in variational autoencoders. In Advances in Neural Information Processing Systems (pp. 13266-13276).
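A sketch of the correction term (the function names here are my own; the $\lambda = 0.5$ case is handled through its limit, and the final loop is just a midpoint-rule check that $C(\lambda)$ really normalizes the density):

```python
import math

def log_C(lam):
    """Log normalizing constant of the continuous Bernoulli."""
    if abs(lam - 0.5) < 1e-6:
        return math.log(2.0)  # limit of 2*atanh(1-2*lam)/(1-2*lam) as lam -> 0.5
    return math.log(2.0 * math.atanh(1.0 - 2.0 * lam) / (1.0 - 2.0 * lam))

def cb_log_likelihood(y, lam):
    """Continuous-Bernoulli log-likelihood of a soft label y in [0, 1]."""
    return y * math.log(lam) + (1.0 - y) * math.log(1.0 - lam) + log_C(lam)

# The density C(lam) * lam^y * (1-lam)^(1-y) should integrate to 1 over y in [0, 1]
lam = 0.8
integral = sum(math.exp(cb_log_likelihood((k + 0.5) / 10000, lam))
               for k in range(10000)) / 10000
print(integral)  # ~1.0
```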
How does Hamiltonian Monte Carlo work?
Before answering the question about an intuitive way to think about Hamiltonian Monte Carlo, it's probably best to get a really firm grasp on regular MCMC. Let's set aside the satellite metaphor for now.

MCMC is useful when you want an unbiased sample from a distribution where you only have something available which is proportional to the PDF, but not the PDF itself. This arises in (e.g.) physics simulations: the PDF is given by the Boltzmann distribution, p ~ exp(-E/kT), but the thing that you can calculate for any configuration of the system is E, not p. The constant of proportionality is not known, because the integral of exp(-E/kT) over the whole space of possible configurations is usually too difficult to calculate. MCMC solves that problem by doing a random walk in a specific way, where the probability of taking ("accepting") each step is related to the ratio of p values (the constant of proportionality cancels out). Over time, the distribution of accepted samples from the random walk converges to the PDF that we want, without ever needing to explicitly calculate p.

Note that in the above, any method of taking random steps is equally valid, as long as the random walker can explore the whole space. The acceptance criterion guarantees that the selected samples converge to the real PDF. In practice, a Gaussian distribution around the current sample is used (and the sigma can be varied so that the fraction of accepted steps stays relatively high). There would be nothing wrong in principle with taking steps from any other continuous distribution ("jumping distribution") around the current sample, although the convergence may be a lot slower.

Now, Hamiltonian Monte Carlo extends the physics metaphor by specifically trying to take steps in a direction which is more likely to be accepted than a Gaussian step. The steps are what a leapfrog integrator would take if it were trying to solve the motion of a system where the potential energy was E.
These equations of motion also include a kinetic energy term, with a (not literally physical) "mass" and "momentum". The steps that the leapfrog integrator takes in "time" are then passed as proposals to the MCMC algorithm.

Why does this work? The Gaussian MC takes steps the same distance in every direction with equal probability; the only thing that biases it towards more densely populated areas of the PDF is that steps in the wrong direction are more likely to be rejected. The Hamiltonian MC proposes steps both in the direction of the E gradient and in the direction of accumulated motion from recent steps (direction and magnitude of the "momentum"). This enables faster exploration of the space, and also a higher probability of reaching more densely populated regions faster.

Now, the satellite metaphor: I think this is not a very useful way to think about it. Satellites move in an exact orbit; what you have here is quite random, more like a particle of gas in a container with other particles. Each random collision gives you a "step"; over time the particle will be everywhere in the container with equal probability (since the PDF here is equal everywhere, except the walls, which represent very high energy / effectively zero PDF). Gaussian MCMC is like an effectively zero-mass particle doing a random walk (or a non-zero-mass particle in a relatively viscous medium): it will get there through Brownian motion, but not necessarily fast. Hamiltonian MC is a particle with non-zero mass: it may gather enough momentum to keep going in the same direction despite collisions, and so it may sometimes shoot from one end of the container to the other (depending on its mass vs the frequency/magnitude of collisions). It would still bounce off the walls, of course, but in general it would tend to explore faster.
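A minimal sketch of the leapfrog proposal (unit mass, and a standard normal target so that U(q) = q²/2; these choices are mine, for illustration). The point to notice is that H = U(q) + p²/2 is nearly conserved along the trajectory, which is what keeps the acceptance probability min(1, exp(H₀ − H₁)) high even for long jumps:

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics (unit mass matrix)."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * eps * grad_U(q)       # initial half step for momentum
    for _ in range(n_steps - 1):
        q += eps * p                 # full step for position
        p -= eps * grad_U(q)         # full step for momentum
    q += eps * p                     # last full position step
    p -= 0.5 * eps * grad_U(q)       # final half step for momentum
    return q, p

# Standard normal target: potential energy U(q) = q^2 / 2, gradient q
U = lambda q: 0.5 * float(np.sum(q**2))
grad_U = lambda q: q

q0, p0 = np.array([1.0]), np.array([0.5])
q1, p1 = leapfrog(q0, p0, grad_U, eps=0.05, n_steps=20)

# The proposal (q1, p1) is accepted with probability min(1, exp(H0 - H1));
# because leapfrog nearly conserves H, that probability stays close to 1.
H0 = U(q0) + 0.5 * float(np.sum(p0**2))
H1 = U(q1) + 0.5 * float(np.sum(p1**2))
print(H0, H1)
```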
Do too many variables in a regression model affect inference?
One problem with dumping all of your predictors into the model is the invitation to extreme collinearity, which will inflate your standard errors and likely make your results uninterpretable.

Judea Pearl has pointed to a second problem, if your inference is aimed at modeling causal relationships. In trying to "control for everything" by including all available predictors, you may actually "unblock" new confounder paths and move farther away from, not closer to, good estimates of causal relationships. In the language of his graphical system, you create a confound if you condition on a collider or on a descendant of a collider.

A third problem: with your limited sample size, statistical power with so many predictors will be low, which will inflate the likelihood that what seems like a finding now will prove not to be one later on, following the reasoning of John Ioannidis (2005).
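The collinearity point can be made concrete with the variance inflation factor, VIF = 1/(1 − R²), which tells you how much the squared standard error of a coefficient is inflated by its correlation with the other predictors. A small simulated sketch (made-up data, not from any real study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)   # nearly collinear with x1

# VIF for x2 given x1; the standard error of its coefficient is inflated
# by sqrt(VIF) compared to an uncorrelated design
r = np.corrcoef(x1, x2)[0, 1]
vif = 1.0 / (1.0 - r**2)
print(f"corr = {r:.3f}, VIF = {vif:.1f}, SE multiplier = {vif**0.5:.1f}")
```

With a correlation around 0.995, the standard error is roughly ten times what it would be with orthogonal predictors, and with many predictors these inflations compound.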
Do too many variables in a regression model affect inference?
I am certainly not an expert in cancer research, but have read that genomic markers (also known as genetic markers) have been studied with respect to their relationship to various diseases. To unmask the true strength of such genetic markers, I suspect one might want to control for exposure to agents likely associated with cancer, especially relating to geographic locations. The latter can serve as proxies for areas of elevated pollution in the air, water or the food stock, or for places noted for fresh fruits (like mangos, ...) believed to be beneficial to the immune system. Another control factor could be age (as a nonlinear variable to proxy the general strength of the immune system). Also, income levels may be a proxy for healthcare access and a possible indicator of a better starting point in health.

Also, my research on chemotherapy and survival rates revealed a positive benefit that may arise from one's ancestry. For example, descendants of people who endured long ocean voyages seeking new habitats (a Darwinian survival of the fittest) apparently have better cancer survival rates following chemotherapy. Controlling for such factors may make the statistical process of assessing the impact of genetic markers more accurate (and also may allow MORE such factors to be included in the model). Here are some studies mentioning some of the controlling factors.
Are unbiased efficient estimators stochastically dominant over other (median) unbiased estimators?
Here is an experiment in a non-standard case, the location Cauchy problem, where non-standard means that there is no uniformly best unbiased estimator. Let us consider $(X_1,\ldots,X_N)$ a sample from a Cauchy $\mathcal{C}(\mu,1)$ distribution and the following four invariant estimators of $\mu$:

$\hat{\mu}_1= \text{median}(X_1,\ldots,X_N)=X_{(N/2)}$
$\hat{\mu}_2= \text{mean}(X_{(N/4)},\ldots,X_{(3N/4)})=\frac{2}{N}(X_{(N/4)}+\ldots+X_{(3N/4)})$
$\hat{\mu}_3=\mu^\text{MLE}$, which is efficient
$\hat{\mu}_4=\hat{\mu}_1+\frac{2}{N}\frac{\partial \ell}{\partial \mu}(\hat{\mu}_1)$

Then comparing the cdfs of the four estimators leads to this picture, where the cdfs of $\hat{\mu}_3$ (gold) and $\hat{\mu}_4$ (tomato) are comparable, and improving upon $\hat{\mu}_1$ (steelblue), itself improving upon $\hat{\mu}_2$ (sienna). A representation of the differences to the empirical cdf of the MLE makes it clearer. Here is the corresponding R code:

T=1e4
N=11
mlechy=function(x){
  return(optimize(function(theta) -sum(dcauchy(x, location=theta, log=TRUE)),
                  c(-100,100))$minimum)
}
est=matrix(0,T,4)
for (t in 1:T){
  cauc=sort(rcauchy(N))
  est[t,1]=median(cauc)
  est[t,2]=mean(cauc[4:8])
  est[t,3]=mlechy(cauc)
  est[t,4]=est[t,1]+(4/N)*sum((cauc-est[t,1])/(1+(cauc-est[t,1])^2))
}
plot(ecdf(est[,1]),col="steelblue",cex=.4,xlim=c(-1,1),main="",ylab="F(x)")
plot(ecdf(est[,2]),add=TRUE,col="sienna",cex=.4)
plot(ecdf(est[,3]),add=TRUE,col="gold",cex=.4)
plot(ecdf(est[,4]),add=TRUE,col="tomato",cex=.4)
Variational Auto Encoder (VAE) sampling from prior vs posterior
Here's what I understood about VAEs:

- The posterior refers to p(z|x), which is approximated by a learnt q(z|x), where z is the latent variable and x is the input.
- The prior refers to p(z). Often, p(z) is approximated with a learnt q(z) or simply fixed to N(0, 1).

The posterior explains how likely the latent variable is given the input, while the prior simply represents how the latent variables are distributed without any conditioning (in CVAEs, conditions are added here as well). Hence, in training, we want to learn a good posterior approximation (Evidence) that explains the input, but in testing we want to generate random samples following the prior distribution (unless you want to condition them somehow).
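To make the training-time vs test-time sampling concrete, here is a minimal numpy sketch; the encoder outputs `mu` and `log_var` are made-up numbers standing in for a trained encoder:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical encoder outputs for one input x (in a real VAE these come
# from the encoder network applied to x)
mu = np.array([0.5, -1.0])
log_var = np.array([-2.0, -2.0])

# Training: sample z ~ q(z|x) = N(mu, diag(sigma^2)) via the
# reparameterization trick, so gradients can flow through mu and log_var
eps = rng.normal(size=(10000, 2))
z_posterior = mu + np.exp(0.5 * log_var) * eps

# Generation/testing: sample z ~ p(z) = N(0, I); the encoder is not used
z_prior = rng.normal(size=(10000, 2))

print(z_posterior.mean(axis=0))  # close to mu
print(z_prior.mean(axis=0))      # close to 0
```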
My neural network can't even learn Euclidean distance
The output seems to strongly suggest that one or more of your neurons goes dead (or perhaps the hyperplane of weights for two of your neurons have merged). You can see that with 3 ReLUs, you get 3 shadowy splits in the center when you converge to the more reasonable solution. You can easily verify whether this is true by checking the output values of each neuron to see if it stays dead for a large majority of your samples. Alternatively, you could plot all 2x3=6 neuron weights, grouped by their respective neuron, to see if two neurons collapse to the same pair of weights.

I suspect that one possible cause of this is when $x+iy$ is skewed toward one coordinate, e.g. $x\gg y$, in which case you're trying to reproduce the identity, since then $\operatorname{abs}(x+iy)\approx x$. There's really not much you can do here to remedy this. One option is to add more neurons, as you've tried. A second option is to try a continuous activation, like a sigmoid, or perhaps something unbounded like an exponential. You could also try dropout (with, say, 10% probability). You could use the regular dropout implementation in keras, which is hopefully smart enough to ignore situations when all 3 of your neurons drop out.
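One way to run the suggested dead-neuron check is to compute, for each hidden unit, the fraction of samples on which its ReLU pre-activation is non-positive (i.e. the unit outputs zero). A minimal numpy sketch (not keras-specific) with made-up weights, where the third unit is constructed to be dead:

```python
import numpy as np

def dead_relu_fraction(X, W, b):
    """Fraction of samples on which each hidden ReLU unit outputs 0."""
    pre = X @ W + b                  # pre-activations, shape (n_samples, n_units)
    return np.mean(pre <= 0, axis=0)

# Toy check: 3 hidden units; the third has a huge negative bias, so it never fires
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
W = rng.normal(size=(2, 3))
b = np.array([0.0, 0.0, -100.0])
frac = dead_relu_fraction(X, W, b)
print(frac)  # third entry ~1.0
```

A unit whose fraction sits near 1.0 across the whole training set is dead: its gradient is zero almost everywhere, so training cannot revive it.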
What is the distribution of a sum of identically distributed Bernoulli random variables if each pair has the same correlation?
Have you seen this paper: Kadane, 2016, Sums of Possibly Associated Bernoulli Variables: The Conway-Maxwell-Binomial Distribution? In this paper, you can see that the conditions assumed in your question, i.e. having $n$ marginally Bernoulli r.v.s with the same probability of success, $p$, and the same pairwise correlation, $\rho$, between all pairs, do not fully specify the distribution of the sum of those random variables. To be more specific, in Section 2.3 of the paper, the author has assumed "zero higher order additive interaction" (Darroch, 1974) to model $P\{W = k\} = P\{\sum_{i=0}^{m} X_i = k\}$. The model is also called the correlated binomial model.

Here is also a brief summary of the first sections of the paper that you may find helpful for modeling the sum:

Propositions 1, 2 and 3 provide reasoning for not using correlation as a measure of dependence and for modeling the sum without assuming a marginal distribution.

Sections 2.1 and 2.2 present distribution models that have these two characteristics. They have some notion of dependence, but it is not necessarily the correlation. They also allow for symmetric dependence. (Proposition 1 states that correlation cannot be used as a measure of dependence, as it is bounded below by $-1/(m-1)$ under the conditions stated in the proposition.)

Section 3 is the author's proposed model to directly model the sums, using a notion of dependence that allows both for positive and negative association.
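The $-1/(m-1)$ lower bound mentioned in Proposition 1 can be seen directly from the variance of the sum: for $m$ equicorrelated Bernoulli($p$) variables, $\text{Var}(S) = m\,p(1-p)\,(1 + (m-1)\rho)$, and non-negativity of the variance forces $\rho \ge -1/(m-1)$. A quick numeric check:

```python
# Variance of the sum of m equicorrelated Bernoulli(p) variables:
#   Var(S) = m * p * (1 - p) * (1 + (m - 1) * rho) >= 0
# which forces rho >= -1 / (m - 1).
def var_sum(m, p, rho):
    return m * p * (1 - p) * (1 + (m - 1) * rho)

m, p = 10, 0.3
print(var_sum(m, p, -1 / (m - 1)))  # 0 at the lower bound
print(var_sum(m, p, -0.2))          # negative => such a correlation is impossible
```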
What does it mean if the ROC AUC is high and the Average Precision is low?
I ran into the same problem recently and found some help in a few posts, which are referenced at the end of this answer. As usual, I will use the abbreviations commonly used in the Confusion Matrix context: TP (True Positives), FP (False Positives), TN (True Negatives), FN (False Negatives). I will also take the positive class to be the minority class, and the negative class to be the majority class. First, you should notice that ROC AUC and Precision-Recall AUC are ranking metrics [1]. This means that they measure how well your probabilities (or scores) can order your data. ROC and Precision-Recall curves are about ordering because of the threshold that is varied in order to build the curves. The difference between these metrics is how the ordering quality is quantified [2]. ROC analysis uses True Positive Rate (TPR, or Recall) and False Positive Rate (FPR). Precision-Recall analysis, on the other hand, exchanges FPR for Precision. So, while ROC uses all the cells (TP, FP, TN, FN) of the Confusion Matrix, Precision-Recall disregards the True Negatives, which have a high impact on an imbalanced problem, since almost all your data belongs to the negative class. Therefore, Precision-Recall gives more weight to the minority (positive) class than ROC does, which is why the Precision-Recall AUC is more suitable for heavily imbalanced problems. The more intuitive meaning of a high ROC AUC together with a low Precision-Recall AUC is that your model orders your data very well (almost all of it belongs to the same class anyway), but high scores do not correlate well with the positive class. You are not very confident about your high scores, but are very confident about the low scores. [1] https://machinelearningmastery.com/tour-of-evaluation-metrics-for-imbalanced-classification/ [2] https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-imbalanced-classification/
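To make this concrete, here is a small synthetic sketch (the class sizes and score distributions are invented for illustration): on a dataset with a 1% positive rate, scores that separate the classes fairly well still pick up enough false positives near the top of the ranking to leave the average precision far below the ROC AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n_neg, n_pos = 10_000, 100  # heavily imbalanced: 1% positives
y = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])
# negatives score around 0, positives around 2: a decent but imperfect ranking
scores = np.concatenate([rng.normal(0.0, 1.0, n_neg), rng.normal(2.0, 1.0, n_pos)])

auc = roc_auc_score(y, scores)
ap = average_precision_score(y, scores)
print(f"ROC AUC = {auc:.3f}, average precision = {ap:.3f}")
```

The ranking is good in the ROC sense, but since there are 100 negatives for every positive, even a small false positive rate floods the top of the ranking, which is exactly the effect described above.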
29,130
What does it mean if the ROC AUC is high and the Average Precision is low?
The floor of ROC AUC is 0.5, while the floor of AUCPR is the positive rate in your data. If your positive rate is low enough, an AUCPR of 0.3 is outstanding.
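A quick synthetic sketch of these floors (illustrative numbers only): scoring a roughly 2%-positive dataset with pure noise lands ROC AUC near 0.5 and average precision near 0.02, the positive rate.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(42)
y = (rng.random(50_000) < 0.02).astype(int)  # about 2% positives
scores = rng.random(50_000)                  # uninformative random scores

auc = roc_auc_score(y, scores)
ap = average_precision_score(y, scores)
print(auc, ap)  # auc near 0.5, ap near the 0.02 positive rate
```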
29,131
Why are batch-normalization techniques less popular in natural language applications than in computer vision?
I think the main reason is that computer vision models tend to be much deeper than the ones commonly used in NLP. It's rare to have more than three or four layers for NLP tasks and oftentimes you can get by with just a single layer LSTM. Batch normalization helps train deeper networks but it is not as important for shallower ones.
29,132
Why are batch-normalization techniques less popular in natural language applications than in computer vision?
I've been wondering about this as well. For some reason, applying batchnorm degrades the performance (accuracy) of NLP benchmarks most of the time. There is a recent paper attributing this to the behavior of the batch statistics during training: We find that there are clear differences in the batch statistics of NLP data versus CV data. In particular, we observe that batch statistics for NLP data have a very large variance throughout training. This variance exists in the corresponding gradients as well. In contrast, CV data exhibits orders of magnitude smaller variance. See Figures 2 and 3 of the paper for a comparison of BN in CV and NLP. https://arxiv.org/pdf/2003.07845v1.pdf
29,133
What are the mean and variance of a 0-censored multivariate normal?
We can first reduce this to depend only on certain moments of univariate/bivariate truncated normal distributions: note of course that $ \DeclareMathOperator{\E}{\mathbb E} \DeclareMathOperator{\Var}{Var} \DeclareMathOperator{\Cov}{Cov} \newcommand{\N}{\mathcal N} \newcommand{\T}{\tilde} \newcommand{\v}{\mathcal V} $ \begin{gather} \E[Z_+] = \begin{bmatrix} \E[(Z_i)_+] \end{bmatrix}_i \\ \Cov(Z_+) = \begin{bmatrix} \Cov\left( (Z_i)_+, (Z_j)_+ \right) \end{bmatrix}_{ij} ,\end{gather} and because we're making coordinate-wise transformations of certain dimensions of a normal distribution, we only need to worry about the mean and variance of a 1d censored normal and the covariance of two 1d censored normals. We'll use some results from S Rosenbaum (1961). Moments of a Truncated Bivariate Normal Distribution. JRSS B, vol 23 pp 405-408. (jstor) Rosenbaum considers $$ \begin{bmatrix}\T X \\ \T Y\end{bmatrix} \sim \N\left( \begin{bmatrix}0 \\ 0\end{bmatrix}, \begin{bmatrix}1 & \rho \\ \rho & 1\end{bmatrix} \right) ,$$ and considers truncation to the event $\v = \{ \T X \ge a_X, \T Y \ge a_Y \}$. Specifically, we'll use the following three results, his (1), (3), and (5). 
First, define the following: \begin{gather} q_x = \phi( a_x) \qquad q_y = \phi( a_y) \\ Q_x = \Phi(-a_x) \qquad Q_y = \Phi(-a_y) \\ R_{xy} = \Phi\left( \frac{\rho a_x - a_y}{\sqrt{1 - \rho^2}} \right) \qquad R_{yx} = \Phi\left( \frac{\rho a_y - a_x}{\sqrt{1 - \rho^2}} \right) \\ r_{xy} = \frac{\sqrt{1-\rho^2}}{\sqrt{2 \pi}} \phi\left( \sqrt{\frac{a_x^2 - 2 \rho a_x a_y + a_y^2}{1 - \rho^2}} \right) \end{gather} Now, Rosenbaum shows that: \begin{align} \Pr(\v) \E[\T X \mid \v] &= q_x R_{xy} + \rho q_y R_{yx} \tag{1} \\ \Pr\left(\v \right) \E\left[\T X^2 \mid \v \right] &= \Pr\left(\v \right) + a_x q_x R_{xy} + \rho^2 a_y q_y R_{yx} + \rho r_{xy} \tag{3} \\ \Pr(\v) \E\left[ \T X \T Y \mid \v \right] &= \rho \Pr(\v) + \rho a_x q_x R_{xy} + \rho a_y q_y R_{yx} + r_{xy} \tag{5} .\end{align} It will be useful to also consider the special case of (1) and (3) with $a_y = -\infty$, i.e. a 1d truncation: \begin{align} \Pr(\v) \E[\T X \mid \v] &= q_x \tag{*} \\ \Pr\left(\v \right) \E\left[\T X^2 \mid \v \right] &= \Pr\left(\v \right) = Q_x \tag{**} .\end{align} We now want to consider \begin{align} \begin{bmatrix}X \\ Y\end{bmatrix} &= \begin{bmatrix}\mu_x\\\mu_y\end{bmatrix} + \begin{bmatrix}\sigma_x & 0 \\ 0 & \sigma_y\end{bmatrix}\begin{bmatrix}\T X \\ \T Y\end{bmatrix} \\&\sim \N\left( \begin{bmatrix}\mu_X\\\mu_Y\end{bmatrix}, \begin{bmatrix}\sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho \sigma_x \sigma_y & \sigma_y^2 \end{bmatrix} \right) \\&= \N\left( \mu, \Sigma \right) .\end{align} We will use $$ a_x = \frac{-\mu_x}{\sigma_x} \qquad a_y = \frac{-\mu_y}{\sigma_y} ,$$ which are the values of $\T X$ and $\T Y$ when $X = 0$, $Y = 0$. 
Now, using (*), we obtain \begin{align} \E[ X_+ ] &= \Pr(X_+ > 0) \E[X \mid X > 0] + \Pr(X_+=0) \, 0 \\&= \Pr(X > 0) \left( \mu_x + \sigma_x \E[\T X \mid \T X \ge a_x] \right) \\&= Q_x \mu_x + q_x \sigma_x ,\end{align} and using both (*) and (**) yields \begin{align} \E[ X_+^2 ] &= \Pr(X_+ > 0) \E[X^2 \mid X > 0] + \Pr(X_+=0) 0 \\&= \Pr\left(\T X \ge a_x\right) \E\left[(\mu_x + \sigma_x \T X)^2 \mid \T X \ge a_x\right] \\&= \Pr\left(\T X \ge a_x\right) \E\left[\mu_x^2 + \mu_x \sigma_x \T X + \sigma_x^2 \T X^2 \mid \T X \ge a_x\right] \\&= Q_x \mu_x^2 + q_x \mu_x \sigma_x + Q_x \sigma_x^2 \end{align} so that \begin{align} \Var[X_+] &= \E[X_+^2] - \E[X_+]^2 \\&= Q_x \mu_x^2 + q_x \mu_x \sigma_x + Q_x \sigma_x^2 - Q_x^2 \mu_x^2 - q_x^2 \sigma_x^2 - 2 q_x Q_x \mu_x \sigma_x \\&= Q_x (1 - Q_x) \mu_x^2 + (1 - 2 Q_x) q_x \mu_x \sigma_x + (Q_x - q_x^2) \sigma_x^2 .\end{align} To find $\Cov(X_+, Y_+)$, we will need \begin{align} \E[X_+ Y_+] &= \Pr(\v) \E[ X Y \mid \v] + Pr(\lnot\v) \, 0 \\&= \Pr(\v) \E\left[ (\mu_x + \sigma_x \T X) (\mu_y + \sigma_y \T Y) \mid \v \right] \\&= \mu_x \mu_y \Pr(\v) + \mu_y \sigma_x \Pr(\v) \E[ \T X \mid \v] + \mu_x \sigma_y \Pr(\v) \E[ \T Y \mid \v] \\&\qquad + \sigma_x \sigma_y \Pr(\v) \E\left[ \T X \T Y \mid \v \right] \\&= \mu_x \mu_y \Pr(\v) + \mu_y \sigma_x (q_x R_{xy} + \rho q_y R_{yx}) + \mu_x \sigma_y (\rho q_x R_{xy} + q_y R_{yx}) \\&\qquad + \sigma_x \sigma_y \left( \rho \Pr\left( \v \right) - \rho \mu_x q_x R_{xy} / \sigma_x - \rho \mu_y q_y R_{yx} / \sigma_y + r_{xy} \right) \\&= (\mu_x \mu_y + \sigma_x \sigma_y \rho) \Pr(\v) + (\mu_y \sigma_x + \mu_x \sigma_y \rho - \rho \mu_x \sigma_y) q_x R_{xy} \\&\qquad + (\mu_y \sigma_x \rho + \mu_x \sigma_y - \rho \mu_y \sigma_x) q_y R_{yx} + \sigma_x \sigma_y r_{xy} \\&= (\mu_x \mu_y + \Sigma_{xy}) \Pr(\v) + \mu_y \sigma_x q_x R_{xy} + \mu_x \sigma_y q_y R_{yx} + \sigma_x \sigma_y r_{xy} ,\end{align} and then subtracting $\E[X_+] \E[Y_+]$ we get \begin{align} \Cov(X_+, Y_+) &= (\mu_x \mu_y 
+ \Sigma_{xy}) \Pr(\v) + \mu_y \sigma_x q_x R_{xy} + \mu_x \sigma_y q_y R_{yx} + \sigma_x \sigma_y r_{xy} \\&\qquad - (Q_x \mu_x + q_x \sigma_x) (Q_y \mu_y + q_y \sigma_y) .\end{align} Here's some Python code to compute the moments:

import numpy as np
from scipy import stats

def relu_mvn_mean_cov(mu, Sigma):
    mu = np.asarray(mu, dtype=float)
    Sigma = np.asarray(Sigma, dtype=float)
    d, = mu.shape
    assert Sigma.shape == (d, d)

    x = (slice(None), np.newaxis)
    y = (np.newaxis, slice(None))

    sigma2s = np.diagonal(Sigma)
    sigmas = np.sqrt(sigma2s)
    rhos = Sigma / sigmas[x] / sigmas[y]

    prob = np.empty((d, d))  # prob[i, j] = Pr(X_i > 0, X_j > 0)
    zero = np.zeros(d)
    for i in range(d):
        prob[i, i] = np.nan
        for j in range(i + 1, d):
            # Pr(X > 0) = Pr(-X < 0); X ~ N(mu, S) => -X ~ N(-mu, S)
            s = [i, j]
            prob[i, j] = prob[j, i] = stats.multivariate_normal.cdf(
                zero[s], mean=-mu[s], cov=Sigma[np.ix_(s, s)])

    mu_sigs = mu / sigmas
    Q = stats.norm.cdf(mu_sigs)
    q = stats.norm.pdf(mu_sigs)

    mean = Q * mu + q * sigmas

    # rho_cs is sqrt(1 - rhos**2); but don't calculate diagonal, because
    # it'll just be zero and we're dividing by it (but not using result)
    # use inf instead of nan; stats.norm.cdf doesn't like nan inputs
    rho_cs = 1 - rhos**2
    np.fill_diagonal(rho_cs, np.inf)
    np.sqrt(rho_cs, out=rho_cs)

    R = stats.norm.cdf((mu_sigs[y] - rhos * mu_sigs[x]) / rho_cs)

    mu_sigs_sq = mu_sigs ** 2
    r_num = mu_sigs_sq[x] + mu_sigs_sq[y] - 2 * rhos * mu_sigs[x] * mu_sigs[y]
    np.fill_diagonal(r_num, 1)  # don't want slightly negative numerator here
    r = rho_cs / np.sqrt(2 * np.pi) * stats.norm.pdf(np.sqrt(r_num) / rho_cs)

    bit = mu[y] * sigmas[x] * q[x] * R
    cov = ((mu[x] * mu[y] + Sigma) * prob
           + bit + bit.T
           + sigmas[x] * sigmas[y] * r
           - mean[x] * mean[y])

    cov[range(d), range(d)] = (
        Q * (1 - Q) * mu**2 + (1 - 2 * Q) * q * mu * sigmas
        + (Q - q**2) * sigma2s)

    return mean, cov

and a Monte Carlo test that it works:

np.random.seed(12)
d = 4
mu = np.random.randn(d)
L = np.random.randn(d, d)
Sigma = L.T.dot(L)

dist = stats.multivariate_normal(mu, Sigma)
mn, cov = relu_mvn_mean_cov(mu, Sigma)

samps = np.maximum(dist.rvs(10**7), 0)  # censor the samples at 0
mn_est = samps.mean(axis=0)
cov_est = np.cov(samps, rowvar=False)
print(np.max(np.abs(mn - mn_est)), np.max(np.abs(cov - cov_est)))

which gives 0.000572145310512 0.00298692620286, indicating that the claimed expectation and covariance match Monte Carlo estimates (based on $10,000,000$ samples).
29,134
Expectation of the softmax transform for Gaussian multivariate variables
I am sorry if I resurrect a fairly old question, but I was facing a very similar problem recently and stumbled upon a paper that might offer some help. The article is: "Semi-analytical approximations to statistical moments of sigmoid and softmax mappings of normal variables" at https://arxiv.org/pdf/1703.00091.pdf Expectation of Softmax approximation For computing the average value of a softmax mapping $\pi \left( \mathbf{\mathsf{x}} \right)$ of multi-normal distributed variables $\mathbf{\mathsf{x}} \sim \mathcal{N}_D \left( \mathbf{\mu}, \mathbf{\Sigma} \right)$, the author provides the following approximation: $$ \mathbb{E} \left[ \pi^k (\mathbf{\mathsf{x}}) \right] \simeq \frac{1}{2 - D + \sum_{k' \neq k} \frac{1}{\mathbb{E} \left[ \sigma \left( x^k - x^{k'} \right) \right]}} $$ where $x^k$ represents the $k$-th component of the $D$-dimensional vector $\mathbf{\mathsf{x}}$ and $\sigma \left( x \right)$ represents the one-dimensional sigmoid function. To evaluate this formula one needs to compute the average value $\mathbb{E} \left[ \sigma (x) \right]$, for which you could use your own approximation (a very similar approximation is again provided in the aforementioned article). This formula is based on rewriting the softmax in terms of sigmoids; it starts from the $D=2$ case you mentioned, where the result is "exact" (as much as an approximation can be), and postulates the validity of the expression for $D>2$. The authors validate their proposal by means of numerical simulation.
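As a sketch of how one might evaluate this in practice, the snippet below plugs the classic probit-style approximation $\mathbb{E}[\sigma(z)] \approx \sigma\!\left(\mu / \sqrt{1 + \pi s^2 / 8}\right)$ for $z \sim \mathcal{N}(\mu, s^2)$ into the inner expectation; note that this particular choice is my assumption, not necessarily the one made in the article, and the test values are invented.

```python
import numpy as np

def expected_sigmoid(mu, s2):
    # E[sigmoid(z)] for z ~ N(mu, s2), via the classic logistic-probit approximation
    return 1.0 / (1.0 + np.exp(-mu / np.sqrt(1.0 + np.pi * s2 / 8.0)))

def expected_softmax(mu, Sigma):
    # E[softmax(x)_k] for x ~ N(mu, Sigma), via the formula quoted above
    D = len(mu)
    out = np.empty(D)
    for k in range(D):
        acc = 0.0
        for kp in range(D):
            if kp != k:
                m = mu[k] - mu[kp]  # mean of x^k - x^k'
                v = Sigma[k, k] + Sigma[kp, kp] - 2.0 * Sigma[k, kp]  # its variance
                acc += 1.0 / expected_sigmoid(m, v)
        out[k] = 1.0 / (2.0 - D + acc)
    return out

# Monte Carlo sanity check on a made-up 3-dimensional example
rng = np.random.default_rng(0)
mu = np.array([0.5, -0.2, 0.1])
A = rng.normal(size=(3, 3))
Sigma = 0.3 * A @ A.T
xs = rng.multivariate_normal(mu, Sigma, size=200_000)
e = np.exp(xs - xs.max(axis=1, keepdims=True))
mc = (e / e.sum(axis=1, keepdims=True)).mean(axis=0)
print(expected_softmax(mu, Sigma))  # semi-analytical approximation
print(mc)                           # Monte Carlo estimate
```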
29,135
Gaussian process with time series
In reverse order: there are many decent GP libraries, such as SKLearn, GPy, pyGPs, GPflow and so on. Secondly, your input is clearly the time, and you can preprocess it as you see fit, but you should understand the high-level questions, such as: do I only care about the hour of the day? (In which case you could drop the date and keep only the 0-23 hours.) Or do you also want to model, say, seasonal trends? (In which case unix timestamps could be useful.) What is important to understand is that GPs are more than just a smooth extrapolation technique - you can encode how similar points in time are via your choice of kernel. So, for example, a daily trend might be encoded as a periodic kernel with a period of one day. You can then add and multiply this with seasonal periodic kernels, maybe add some noise, and so on. This kernel arithmetic creates distributions over sums and products of function spaces. Check out for example http://www.cs.toronto.edu/~duvenaud/cookbook/ and https://github.com/jkfitzsimons/IPyNotebook_MachineLearning/blob/master/Just%20Another%20Kernel%20Cookbook....ipynb.
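As a rough sketch of this kernel arithmetic using scikit-learn (one of the libraries above; the data, the hourly time encoding and the hyperparameter starting values are all made up for illustration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, RBF, WhiteKernel

# two weeks of hourly observations: a daily cycle plus a slow drift plus noise
t = np.arange(0.0, 24 * 14)[:, None]
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * t[:, 0] / 24) + 0.01 * t[:, 0] + 0.1 * rng.normal(size=len(t))

kernel = (ExpSineSquared(length_scale=5.0, periodicity=24.0)  # daily periodicity
          + RBF(length_scale=100.0)                           # slow trend
          + WhiteKernel(noise_level=0.01))                    # observation noise
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

t_next = np.arange(24 * 14, 24 * 15, dtype=float)[:, None]  # forecast the next day
y_pred, y_std = gp.predict(t_next, return_std=True)
```

Adding kernels sums the corresponding function components; multiplying them (e.g. a daily periodic kernel times a long-length-scale RBF) lets the shape of the daily pattern itself drift slowly over time.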
29,136
Is it pointless to use Bagging with nearest neighbor classifiers?
In the original paper about bagging, Breiman refers to this point. He explains that unstable learners are likely to give different predictions for modified datasets and likely to benefit from bagging. On the other hand, stable learners (take to the extreme a constant) will give quite similar predictions anyway, so bagging won't help. He also refers to the stability of specific algorithms: Unstability was studied in Breiman [1994] where it was pointed out that neural nets, classification and regression trees, and subset selection in linear regression were unstable, while k-nearest neighbor methods were stable. Breiman [1994] is "Breiman, L. (1994) Heuristics of instability in model selection, Technical Report, Statistics Department, University of California at Berkeley." I think Breiman extended the technical report into Heuristics of Instability and Stabilization in Model Selection, but he hardly refers to knn there. I think your intuition is correct. The lower k, the more unstable the model will be. The more we modify the dataset, the higher the probability of using a different set of neighbors. If you take k=1 and modify the dataset enough that the probability of getting the same neighbor is less than 80%, bagging should help. I think the use case Breiman had in mind is a higher k and more delicate modifications. If you have k=10 and a 99% probability of keeping the same neighbors, the results will be quite stable.
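A small numerical aside on that instability (my own illustration, not from Breiman): under a bootstrap resample of n points, any fixed training point, say a query's nearest neighbor, survives with probability 1 - (1 - 1/n)^n, which tends to 1 - 1/e ≈ 0.632. So a bagged 1-NN consults a different nearest neighbor in roughly 37% of the replicates, which is exactly the kind of dataset modification that makes bagging worthwhile for low k.

```python
import numpy as np

n = 10_000        # training set size
trials = 5_000    # bootstrap replicates
rng = np.random.default_rng(0)

survived = 0
for _ in range(trials):
    boot = rng.integers(0, n, size=n)    # one bootstrap resample (indices)
    survived += bool((boot == 0).any())  # did point #0 make it into the resample?

frac = survived / trials
print(frac)  # close to 1 - 1/e, about 0.632
```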
29,137
how can i simulate with arima.sim drift, intercept and trend
Deterministic Trend If your drift coefficient is $c$, you can just add the function $ct$ to the zero-mean process. Code:

xt <- arima.sim(n=50, list(order=c(1,0,1), ar = c(.9), ma = -.2))

becomes

xtWithDrift <- xt + .20*seq(1,50)

The model would be written like \begin{align*} X_t &= \phi X_{t-1} + Z_t + \theta Z_{t-1} \\ Y_t &= a + ct + X_t. \end{align*} Stochastic Trend If you want a stochastic trend, you're better off simulating the differences, then summing those. With a nonrandom starting point, for example:

startSpot <- 3
yt <- arima.sim(n=50, list(order=c(1,0,1), ar=c(.9), ma=-.2)) + .2  # see comment below
plot(startSpot + cumsum(yt))

This gives you $$ X_t = 3 + \sum_{j=1}^t Y_j $$ where $Y_t$ is the ARMA(1,1). $X_t$ is the ARIMA(1,1,1). Or in other words, $$ (1 - \phi B)(Y_t - .2) = (1 + \theta B) Z_t, $$ where $Y_t = X_t - X_{t-1}$.
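For readers outside R, both constructions can be sketched in plain Python. The ARMA(1,1) recursion and the parameter values mirror the R calls above; the burn-in length is an arbitrary choice:

```python
import random

random.seed(42)

def simulate_arma11(n, phi=0.9, theta=-0.2, burn=100):
    """Zero-mean ARMA(1,1): x_t = phi*x_{t-1} + z_t + theta*z_{t-1}."""
    x, z_prev = 0.0, 0.0
    out = []
    for t in range(n + burn):
        z = random.gauss(0.0, 1.0)
        x = phi * x + z + theta * z_prev
        z_prev = z
        if t >= burn:
            out.append(x)
    return out

n = 50
xt = simulate_arma11(n)

# Deterministic trend: y_t = a + c*t + x_t
a, c = 0.0, 0.20
yt_drift = [a + c * (t + 1) + x for t, x in enumerate(xt)]

# Stochastic trend: integrate an ARMA(1,1) with mean 0.2 from a fixed start.
start = 3.0
increments = [x + 0.2 for x in simulate_arma11(n)]
level = [start]
for d in increments:
    level.append(level[-1] + d)
```

The first block adds the deterministic line $ct$ after simulation; the second cumulatively sums mean-shifted differences, exactly as `cumsum` does in the R version.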
29,138
Which Bootstrap method is most preferred?
It's never too late to fix an error... At least I think there is an error in the implementation of the bootstrap $t$ interval, which requires a bootstrap within the bootstrap (double bootstrap). Note: I refactor the R code for clarity. And since the OP didn't provide data or the code to reproduce the data, I use the nerve data from the book All of Nonparametric Statistics by L. Wasserman. The measurements are 799 waiting times between successive pulses along a nerve fiber and were originally reported in Cox and Lewis (1966). The goal is to compute the studentized bootstrap interval defined as: $$ \begin{aligned} \left( T - t^*_{1-\alpha/2}\widehat{\operatorname{se}}_{\text{boot}}, T - t^*_{\alpha/2}\widehat{\operatorname{se}}_{\text{boot}} \right) \end{aligned} $$ where $T$ is the statistic of interest (here skewness); $\widehat{\operatorname{se}}_{\text{boot}}$ is the standard error of the bootstrapped statistics $\tilde{T}_1, \ldots, \tilde{T}_B$; $t^*_q$ is the $q$ sample quantile of $T^*_1, \ldots, T^*_B$; and $T^*_b$ is the bootstrapped pivotal quantity: $$ \begin{aligned} T^*_b = \frac{\tilde{T}_b - T}{\widehat{\operatorname{se}}^*_b} \end{aligned} $$ The standard error $\widehat{\operatorname{se}}^*_b$ of $T^*_b$ is estimated with an inner bootstrap inside each bootstrap iteration $b$. That's computationally expensive.

set.seed(1234)
# Dataset used in the book All of Nonparametric Statistics by L. Wasserman.
# https://www.stat.cmu.edu/~larry/all-of-nonpar/data.html
x <- scan("nerve.dat")
n <- length(x)

The bootstrap is a general procedure, so let's make the implementation general as well by defining a function estimator to calculate a statistic $T$ (for the nerve data it's the skewness) from a sample $x$ as well as a function simulator to generate bootstrap resamples of $x$.

estimator <- function(x) {
  # sample skewness
  mean((x - mean(x))^3) / sd(x)^3
}
simulator <- function(x) {
  sample(x, size = length(x), replace = TRUE)
}

So here is the bootstrap $t$ implementation outlined in the question. I've highlighted the errors in comments.

alpha <- 0.05
B <- 2000
# Warning: This code snippet doesn't implement the bootstrap t correctly.
Tstat <- estimator(x)
Tboot <- numeric(B)
Tstar <- numeric(B)
for (i in seq(B)) {
  boot <- simulator(x)
  Tboot[i] <- estimator(boot)
  # > Error: Doesn't bootstrap the std. error of Tboot[i] correctly.
  Tstar[i] <- (Tboot[i] - Tstat) / sd(boot)
}
q.lower <- quantile(Tstar, alpha / 2)
q.upper <- quantile(Tstar, 1 - alpha / 2)
# > Error: Uses the standard deviation of the sample x
# > instead of the standard error of the statistic T.
c(
  Tstat - q.upper * sd(x),
  Tstat - q.lower * sd(x)
)
#> 97.5% 2.5%
#> 1.451426 2.102211

And here is the correct way to do the bootstrap $t$ method. Note that this procedure requires doing a bootstrap inside the bootstrap. This is more computationally intensive but (we expect) more accurate.

Tstat <- estimator(x)
Tboot <- numeric(B)
Tstar <- numeric(B)
for (i in seq(B)) {
  boot <- simulator(x)
  Tboot[i] <- estimator(boot)
  # > Bootstrap the bootstrap sample to estimate the std. error of Tboot.
  boot_of_boot <- replicate(B, estimator(simulator(boot)))
  Tstar[i] <- (Tboot[i] - Tstat) / sd(boot_of_boot)
}
q.lower <- quantile(Tstar, alpha / 2)
q.upper <- quantile(Tstar, 1 - alpha / 2)
# > Use the bootstrap estimate of the std. error of T.
c(
  Tstat - q.upper * sd(Tboot),
  Tstat - q.lower * sd(Tboot)
)
#> 97.5% 2.5%
#> 1.463658 2.292374

Example 3.17 of All of Nonparametric Statistics reports the studentized 95% interval for the skewness of the nerve data as (1.45, 2.28). This agrees well with the result obtained with the second/correct implementation of the bootstrap $t$ above.

References
L. Wasserman (2007). All of Nonparametric Statistics. Springer.
Cox, D. and Lewis, P. (1966). The Statistical Analysis of Series of Events. Chapman and Hall.
Nerve dataset downloaded from https://www.stat.cmu.edu/~larry/all-of-nonpar/data.html
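The same double-bootstrap logic can be sketched language-agnostically in plain Python. To keep it short and dependency-free, this sketch uses the sample mean of a small synthetic sample as the statistic, and deliberately small replication counts; it is an illustration of the structure, not a drop-in replacement for the R code:

```python
import random
import statistics

random.seed(1)

def resample(xs):
    return [random.choice(xs) for _ in xs]

# Synthetic skewed data standing in for the nerve waiting times.
x = [random.expovariate(1.0) for _ in range(40)]
estimator = statistics.mean

B_outer, B_inner, alpha = 200, 100, 0.05
T = estimator(x)
T_boot, T_star = [], []
for _ in range(B_outer):
    boot = resample(x)
    t_b = estimator(boot)
    # Inner bootstrap: std. error of the statistic on this resample.
    inner = [estimator(resample(boot)) for _ in range(B_inner)]
    T_boot.append(t_b)
    T_star.append((t_b - T) / statistics.stdev(inner))

# Crude sample quantiles of the pivots, then the studentized interval.
T_star.sort()
q_lo = T_star[int((alpha / 2) * B_outer)]
q_hi = T_star[int((1 - alpha / 2) * B_outer) - 1]
se_boot = statistics.stdev(T_boot)
ci = (T - q_hi * se_boot, T - q_lo * se_boot)
```

The inner loop is exactly the `boot_of_boot` step above: each outer resample gets its own bootstrap to estimate the standard error appearing in the pivot's denominator.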
29,139
Which Bootstrap method is most preferred?
As Michael Chernick notes, it would be useful to also look at the bias-corrected (BC) and bias-corrected and accelerated (BCa) bootstrap. The BCa variant in particular attempts to deal with skewness in data, as you apparently have. DiCiccio & Efron (1996, Statistical Science) found that it performs well, as do Davison & Hinkley, Bootstrap Methods and their Applications (1997). Why does my bootstrap interval have terrible coverage? is related, and I would especially recommend the article by Canto et al. (2006) that I cite there. And in the end, I agree that the answer is likely related to sample size, as well as your underlying distribution, and the pivotality (or lack thereof) of the statistic you are bootstrapping.
29,140
Any use of non-rectangular-shaped kernels in convolutional neural networks? Especially when analyzing game boards
This seems to come up in earlier Herbrich papers on Go. "Learning on Graphs in the Game of Go" - where he looks at the board as a different topology. And this slide from a 2015 presentation of his mentions 13 different "patterns" (which is somewhat different from the AlphaGo approach). References Graepel, T., Goutrie, M., Krüger, M., & Herbrich, R. (2001, August). "Learning on graphs in the game of Go." In International Conference on Artificial Neural Networks (pp. 347-352). Springer Berlin Heidelberg. Herbrich, R. (2015) "Machine Learning in Industry". Retrieved from http://mlss.tuebingen.mpg.de/2015/slides/herbrich/herbrich.pdf
29,141
Any use of non-rectangular-shaped kernels in convolutional neural networks? Especially when analyzing game boards
{1} compared square versus triangular 2D convolutions. As Geomatt22 mentions, in the example you gave in the question, one could use a square filter and hope that the "actual" shape of the filter is learnt during the training phase. {1} Graham, Ben. "Sparse 3D convolutional neural networks." arXiv preprint arXiv:1505.02890 (2015). https://scholar.google.com/scholar?cluster=10336237130292873407&hl=en&as_sdt=0,22 ; https://arxiv.org/abs/1505.02890
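An alternative to hoping the square filter learns the right zeros is to fix them: multiply the kernel by a binary mask before convolving, so only the non-rectangular support contributes. A minimal stdlib-only 2D sketch (the lower-triangular mask is just an example shape):

```python
def conv2d_masked(image, kernel, mask):
    """Valid-mode 2D convolution where kernel entries are zeroed by `mask`."""
    kh, kw = len(kernel), len(kernel[0])
    masked = [[kernel[i][j] * mask[i][j] for j in range(kw)] for i in range(kh)]
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[r + i][c + j] * masked[i][j]
                for i in range(kh) for j in range(kw)
            ))
        out.append(row)
    return out

# 3x3 kernel restricted to a lower-triangular support (6 active weights).
kernel = [[1.0] * 3 for _ in range(3)]
mask = [[1, 0, 0],
        [1, 1, 0],
        [1, 1, 1]]
image = [[1.0] * 4 for _ in range(4)]
result = conv2d_masked(image, kernel, mask)
```

In a deep-learning framework the same effect is usually achieved by multiplying the weight tensor by a fixed mask at each forward pass, so the gradient never updates the masked-out entries.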
29,142
What is the physical significance of inverse of a matrix? [closed]
Matrix Inverse in Terms of Geometry: If a matrix acts on a set of vectors by rotating and scaling them, then the matrix's inverse will undo the rotations and scalings and return the original vectors. If the transformation is not one-to-one, there are several ways to produce the same result and you cannot determine the path you need to take to reverse it. In terms of geometry that means that the vectors you're scaling/rotating are in some sense so alike that you can reproduce a specific result by combining the vectors in more than one way. I believe in terms of statistics we'd refer to that as multicollinearity. If the transformation is not one-to-one then you have a singular matrix, and you need to apply specific rules governing how you interpret the transformation in order to generate an inverse.
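A concrete 2-D illustration of "the inverse undoes the rotation and scaling", in plain Python (the angle and scale factor are arbitrary):

```python
import math

def apply2(A, v):
    """Apply a 2x2 matrix A to a 2-vector v."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

theta, s = math.pi / 6, 2.0
# M rotates by theta and scales by s.
M = [[s * math.cos(theta), -s * math.sin(theta)],
     [s * math.sin(theta),  s * math.cos(theta)]]
# Its inverse rotates by -theta and scales by 1/s.
M_inv = [[ math.cos(theta) / s, math.sin(theta) / s],
         [-math.sin(theta) / s, math.cos(theta) / s]]

v = [3.0, -1.0]
w = apply2(M_inv, apply2(M, v))  # rotate+scale, then undo it
```

Here `w` recovers `v` up to floating-point error; if `M` instead collapsed two directions onto one (a singular matrix), no such `M_inv` could exist, since two different inputs would map to the same output.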
29,143
multivariate Effective Sample Size (multiESS): choice of batch size
Good question. FYI, this problem is also there in the univariate setting, and is not only in the multivariate ESS. This is the best I can think of right now. The choice of the optimal batch size is an open problem. However, it is clear that in terms of asymptotics $b_n$ should increase with $n$ (this is mentioned in the paper as well, I think; in general it is known that if $b_n$ does not increase with $n$, then $\Sigma$ will not be estimated strongly consistently). So instead of looking at $1 \leq b_n \leq n$ it will be better (or at least theoretically better) to look at $b_n = n^{t}$ where $0 < t < 1$. Now, let me first explain how the batch means estimator works. Suppose you have a $p$-dimensional Markov chain $X_1, X_2, X_3, \dots, X_n$. To estimate $\Sigma$, this Markov chain is broken into batches ($a_n$ batches of size $b_n$), and the sample mean of each batch is calculated ($\bar{Y}_i$). $$ \underbrace{X_1, \dots, X_{b_n}}_{\bar{Y}_{1}}, \quad \underbrace{X_{b_n+1}, \dots, X_{2b_n}}_{\bar{Y}_{2}}, \quad \dots\quad ,\underbrace{X_{n-b_n+1},\dots, X_{n}}_{\bar{Y}_{a_n}}.$$ The (scaled) sample covariance of these batch means is the batch means estimator. If $b_n = 1$, then the batch means will be exactly the Markov chain, and your batch means estimator will estimate $\Lambda$ and not $\Sigma$. If $b_n = 2$, then you are assuming that there is only significant correlation up to lag 1, and that all correlation after that is too small. This is likely not true, since significant lags usually extend to 20-40 or beyond. On the other hand, if $b_n > n/2$ you have only one batch, and thus will have no batch means estimator. So you definitely want $b_n < n/2$. But you also want $b_n$ to be low enough so that you have enough batches to estimate the covariance structure. In Vats et al., I think they choose $n^{1/2}$ when the chain is slowly mixing and $n^{1/3}$ when mixing is reasonable. A reasonable thing to do is to look at how many significant lags you have. If you have large lags, then choose a larger batch size, and if you have small lags choose a smaller batch size. If you want to use the method you mentioned, then I would restrict $b_n$ to a much smaller set. Maybe let $T = \{ .1, .2, .3, .4, .5\}$, and take $$ \text{mESS}^* = \min_{t \in T, b_n = n^t} \text{mESS}(b_n)$$ From my understanding of the field, there is still some work to be done in choosing batch sizes, and a couple of groups (including Vats et al.) are working on this problem. However, the ad-hoc way of choosing batch sizes by learning from the ACF plots seems to have worked so far. EDIT------ Here is another way to think about this. Note that the batch size should ideally be such that the batch means $\bar{Y}$ have no lag correlation associated with them. So the batch size can be chosen such that the ACF plot for the batch means shows no significant lags. Consider the AR(1) example below (this is univariate, but can be extended to the multivariate setting). For $\epsilon \sim N(0,1)$: $$x_t = \rho x_{t-1} + \epsilon. $$ The closer $\rho$ is to 1, the slower mixing the chain is. I set $\rho = .5$ and run the Markov chain for $n = 100,000$ iterations. Here is the ACF plot for the Markov chain $x_t$. Seeing how there are lags only up to 5-6, I choose $b_n = \lfloor n^{1/5} \rfloor$, break the chain into batches, and calculate the batch means. Now I present the ACF for the batch means. Ah, there is one significant lag in the batch means. So maybe $t = 1/5$ is too low. I choose $t = 1/4$ instead (you could choose something in between also, but this is easier), and again look at the ACF of the batch means. No significant lags! So now you know that choosing $b_n = \lfloor n^{1/4} \rfloor$ gives you big enough batches so that subsequent batch means are approximately uncorrelated.
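As a concrete check of the batch-means machinery, here is a stdlib-only Python version of the AR(1) experiment: with $\rho = 0.5$ the true asymptotic variance of the sample mean is $1/(1-\rho)^2 = 4$, and the batch-means estimate with $b_n \approx n^{1/4}$ should land near it:

```python
import random

random.seed(7)

# AR(1): x_t = rho * x_{t-1} + eps,  eps ~ N(0, 1).
rho, n = 0.5, 100_000
x, chain = 0.0, []
for _ in range(n):
    x = rho * x + random.gauss(0.0, 1.0)
    chain.append(x)

def batch_means_var(chain, b):
    """Batch-means estimate of the asymptotic variance of the sample mean."""
    a = len(chain) // b                       # number of full batches
    means = [sum(chain[i * b:(i + 1) * b]) / b for i in range(a)]
    grand = sum(means) / a
    return b * sum((m - grand) ** 2 for m in means) / (a - 1)

b = round(n ** (1 / 4))                       # batch size ~ n^{1/4}
sigma2_hat = batch_means_var(chain, b)
# Theory for this AR(1): sigma^2 = 1 / (1 - rho)^2 = 4.
```

Repeating this with `b = 1` would instead estimate the marginal variance $1/(1-\rho^2) \approx 1.33$ (the $\Lambda$ rather than $\Sigma$ situation described above), which is exactly why too-small batches understate the Monte Carlo error.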
29,144
Expected value of maximum ratio of n iid normal variables
The expectation is undefined. Let the $X_i$ be iid according to any distribution $F$ with the following property: there exists a positive number $h$ and a positive $\epsilon$ such that $$F(x) - F(0) \ge h x\tag{1}$$ for all $0 \lt x \lt \epsilon$. This property is true of any continuous distribution, such as a Normal distribution, whose density $f$ is continuous and nonzero at $0$, for then $F(x) - F(0) = f(0)x + o(x)$, allowing us to take for $h$ any fixed value between $0$ and $f(0)$. To simplify the analysis I will also assume $F(0) \gt 0$ and $1-F(1) \gt 0$, both of which are true for all Normal distributions. (The latter can be assured by rescaling $F$ if necessary. The former is used only to permit a simple underestimate of a probability.) Let $t \gt 1$ and let us underestimate the survival function of the ratio as $$\eqalign{ \Pr\left(\frac{X_{(i+1)}}{X_{(i)}} \gt t\right) &= \Pr(X_{(i+1)} \gt t X_{(i)}) \\ &\gt \Pr(X_{(i+1)}\gt 1,\ X_{(i)} \le 1/t) \\ &\gt \Pr(X_{(i+1)}\gt 1,\ 1/t \ge X_{(i)} \gt 0,\ 0 \ge X_{(i-1)}).}$$ That latter probability is the chance that exactly $n-i$ of the $X_j$ exceed $1$, exactly one lies in the interval $(0,1/t]$, and the remaining $i-1$ (if any) are nonpositive. In terms of $F$ that chance is given by the multinomial expression $$\binom{n}{n-i,1,i-1}(1-F(1))^{n-i}(F(1/t)-F(0))F(0)^{i-1}.$$ When $t \gt 1/\epsilon$, inequality $(1)$ provides a lower bound for this that is proportional to $1/t$, showing that The survival function $S(t)$ of $X_{(i+1)}/X_{(i)}$ has a tail behaving asymptotically as $1/t$: that is, $S(t) = a/t + o(1/t)$ for some positive number $a$. By definition, the expectation of any random variable is the expectation of its positive part $\max(X,0)$ plus the expectation of its negative part $-\max(-X,0)$.
Since the positive part of the expectation--if it exists--is the integral of the survival function (from $0$ to $\infty$) and $$\int_0^x S(t) dt = \int_0^x (1/t + o(1/t))dt\; \propto\; \log(x),$$ the positive part of the expectation of $X_{(i+1)}/X_{(i)}$ diverges. The same argument applied to the variables $-X_i$ shows the negative part of the expectation diverges. Thus, the expectation of the ratio isn't even infinite: it is undefined.
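A quick Monte Carlo illustration of this heavy $1/t$ tail, taking $n = 3$ and the ratio $X_{(2)}/X_{(1)}$ as an example (stdlib only; the replication count is arbitrary):

```python
import random

random.seed(3)

reps = 200_000
big10 = big100 = 0
for _ in range(reps):
    xs = sorted(random.gauss(0.0, 1.0) for _ in range(3))
    r = xs[1] / xs[0]          # ratio of consecutive order statistics
    if r > 10.0:
        big10 += 1
    if r > 100.0:
        big100 += 1
p10, p100 = big10 / reps, big100 / reps
# A 1/t tail predicts p10 / p100 ~ 10; the key point is that p100 stays
# well away from zero, driven by draws where X_(1) lands just above 0.
```

Because the tail probability only shrinks like $1/t$, the sample mean of such ratios never settles down as you add replications, which is the empirical face of the undefined expectation.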
Expected value of maximum ratio of n iid normal variables
The expectation is undefined. Let the $X_i$ be iid according to any distribution $F$ with the following property: there exists a positive number $h$ and a positive $\epsilon$ such that $$F(x) - F(0)
Expected value of maximum ratio of n iid normal variables The expectation is undefined. Let the $X_i$ be iid according to any distribution $F$ with the following property: there exists a positive number $h$ and a positive $\epsilon$ such that $$F(x) - F(0) \ge h x\tag{1}$$ for all $0 \lt x \lt \epsilon$. This property is true of any continuous distribution, such a Normal distribution, whose density $f$ is continuous and nonzero at $0$, for then $F(x) - F(0) = f(0)x + o(x)$, allowing us to take for $h$ any fixed value between $0$ and $f(0)$. To simplify the analysis I will also assume $F(0) \gt 0$ and $1-F(1) \gt 0$, both of which are true for all Normal distributions. (The latter can be assured by rescaling $F$ if necessary. The former is used only to permit a simple underestimate of a probability.) Let $t \gt 1$ and let us overestimate the survival function of the ratio as $$\eqalign{ \Pr\left(\frac{X_{(i+1)}}{X_{(i)}} \gt t\right) &= \Pr(X_{(i+1)} \gt t X_{(i)}) \\ &\gt \Pr(X_{(i+1)}\gt 1,\ X_{(i)} \le 1/t) \\ &\gt \Pr(X_{(i+1)}\gt 1,\ 1/t \ge X_{(i)} \gt 0,\ 0 \ge X_{(i-1)}).}$$ That latter probability is the chance that exactly $n-i$ of the $X_j$ exceed $1$, exactly one lies in the interval $(0,1/t]$, and the remaining $i-1$ (if any) are nonpositive. In terms of $F$ that chance is given by the multinomial expression $$\binom{n}{n-i,1,i-1}(1-F(1))^{n-i}(F(1/t)-F(0))F(0)^{i-1}.$$ When $t \gt 1/\epsilon$, inequality $(1)$ provides a lower bound for this that is proportional to $1/t$, showing that The survival function $S(t)$ of $X_{(i+1)}/X_{(i)}$, has a tail behaving asymptotically as $1/t$: that is, $S(t) = a/t + o(1/t)$ for some positive number $a$. By definition, the expectation of any random variable is the expectation of its positive part $\max(X,0)$ plus the expectation of its negative part $-\max(-X,0)$. 
Since the positive part of the expectation--if it exists--is the integral of the survival function (from $0$ to $\infty$) and $$\int_0^x S(t) dt = \int_0^x (1/t + o(1/t))dt\; \propto\; \log(x),$$ the positive part of the expectation of $X_{(i+1)}/X_{(i)}$ diverges. The same argument applied to the variables $-X_i$ shows the negative part of the expectation diverges. Thus, the expectation of the ratio isn't even infinite: it is undefined.
29,145
LASSO regularisation parameter from LARS algorithm
I have figured out how to perform the required conversion. Assume that the inputs $X$ are standardised (zero mean, unit variance) and the responses $y$ are centered. We know that the modified LARS algorithm provides the full LASSO regularisation path, cf. the original paper by Efron et al. This means that, at each iteration $k$, the former algorithm finds an optimal couple $(\beta^*, \lambda^*)$ minimising the regularised loss function: \begin{align} (\beta^*, \lambda^*) &= \text{argmin}_{(\beta,\lambda)} L(\beta,\lambda) \\ L(\beta,\lambda) &= \Vert y-X\beta \Vert_2^2 + \lambda \Vert \beta \Vert_1 \\ &= \sum_{i=1}^N \left(y_i - \sum_{j=1}^p \beta_j X_{ij}\right)^2 + \lambda \sum_{j=1}^p \vert \beta_j \vert \end{align} For all active components $a=\{1,...,q\}$ in the active set $\mathcal{A}_k$ at the end of step $k$, applying the KKT stationarity condition gives \begin{align} 0 &= \frac{\partial L}{\partial \beta_a}(\beta^*,\lambda^*) \\ &= -2 \sum_{i=1}^N X_{ia} \left(y_i - \sum_{j=1}^q \beta_j^* X_{ij}\right) + \lambda^*\ \text{sign}(\beta_a^*) \end{align} In other words $$ \lambda^* = 2 \frac{\sum_{i=1}^N X_{ia} \left(y_i - \sum_{j=1}^q \beta_j^* X_{ij}\right)}{\text{sign}(\beta_a^*)} $$ or in matrix notation (noting that dividing and multiplying by $\text{sign}(x)$ are the same) the following equation is satisfied for any active component $a$: $$ \lambda^* = 2 \ \text{sign}(\beta_a^*) X_a^T r $$ In the original paper, the authors mention that for any solution to the LASSO problem, the sign of an active regression weight ($\beta_a^*$) should be identical to the sign of the corresponding active predictor's correlation with the current regression residual ($X_a^T r$), which is only logical since $\lambda^*$ must be positive. Thus we can also write: $$ \lambda^* = 2 \vert X_a^T r \vert $$ In addition, we see that at the final step $k$ (OLS fit, $\beta^* = (X^TX)^{-1}X^T y $), we get $\lambda^* = 0$ due to the orthogonality lemma.
The use of the median in the MATLAB implementation I found IMHO seems like an effort to 'average out' numerical errors over all the active components: $$ \lambda^* = \text{median}( 2 \vert X_{\mathcal{A}_k}^T r_{\mathcal{A}_k} \vert ),\ \ \ \forall k > 1$$ To compute the value of $\lambda$ when there are no active components (step $k=1$), one can use the same trick as above but in the infinitesimal limit where all regression weights are zero and only the sign of the first component $b$ to become active (at step $k=2$) matters. This yields: $$ \lambda^* = 2 \ \text{sign}(\beta_b^*) X_b^T y $$ which is strictly equivalent to $$ \lambda^* = \max(2 \vert X^T y \vert), \text { for } k=1$$ because (i) same remark as earlier concerning the sign of regression weights; (ii) the LARS algorithm determines the next component $b$ to enter the active set as the one which is the most correlated with the current residual, which at step $k=1$ is simply $y$.
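As a quick numerical sanity check of the identity $\lambda^* = 2\vert X_a^T r\vert$ (a sketch, not part of the original derivation): scikit-learn's Lasso minimises $\frac{1}{2N}\Vert y-X\beta\Vert_2^2 + \alpha\Vert\beta\Vert_1$, so its $\alpha$ corresponds to $\lambda = 2N\alpha$ in the parameterisation above. The data below are simulated.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, p = 200, 5
X = rng.standard_normal((N, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardised inputs
y = X @ np.array([2.0, -1.0, 0.0, 0.0, 0.5]) + rng.standard_normal(N)
y = y - y.mean()                               # centred response

# Step k = 1 (no active components): lambda* = max(2 |X^T y|)
lam_start = np.max(2 * np.abs(X.T @ y))

# sklearn's alpha maps to lambda = 2 N alpha in the ||.||^2 + lambda ||.||_1 form
alpha = 0.1
fit = Lasso(alpha=alpha, fit_intercept=False, tol=1e-12, max_iter=100_000).fit(X, y)
r = y - X @ fit.coef_                          # current residual
active = np.abs(fit.coef_) > 1e-10

# KKT: 2 |X_a^T r| equals lambda = 2 N alpha for every active component a
lam_active = 2 * np.abs(X[:, active].T @ r)
print(lam_start, lam_active)
```

All entries of `lam_active` should be (numerically) identical and equal to $2N\alpha$, matching the KKT relation for the active set.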
29,146
Why aren't type II errors emphasized as much in statistical literature?
This is a good question. Let me begin with a couple of clarifications: It doesn't really mean anything for a "[t]ype II error [to] be significant" (or for a type I error to be). Certainly, it might be very important that we missed a true effect, though. Also, we do not generally "[accept] the null hypothesis". (For more on that, it may help to read my answer here: Why do statisticians say a non-significant result means “you can't reject the null” as opposed to accepting the null hypothesis?) I think you are (unfortunately) right that less attention is paid to power and type II errors. While I think the situation is improving in biomedical research (e.g., funding agencies and IRBs often require power analyses now), I think there are a couple of reasons for this:
1. Power is harder for people to understand than simple significance. (This is in part because it depends on a lot of unknowns—notably the effect size, but there are others as well.)
2. Most sciences (i.e., other than physics and chemistry) are not well mathematized. As a result, it is very hard for researchers to know what the effect size 'should' be given their theory (other than just $\ne 0$).
3. Scientists have traditionally assumed that type I errors are worse than type II errors.
29,147
Why aren't type II errors emphasized as much in statistical literature?
The reason is that we simply don't know the actual type II error rate, and we never will. It depends on a parameter we usually don't know; if we knew this parameter, we would not need to do a statistical test in the first place. However, we can plan an experiment such that a specific type II error rate is met, given that some alternative is true. This way, we choose a sample size that does not waste resources: either because the test would not reject in the end, or because a much smaller sample size would already have been sufficient to reject the hypothesis.
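For illustration (the numbers here are my assumptions, not from the answer): the smallest sample size meeting a target type II error rate can be found directly for a simple case such as a two-sided one-sample z-test, given a hypothesised standardised effect size.

```python
from math import sqrt
from scipy.stats import norm

def min_n(effect, alpha=0.05, power=0.80):
    """Smallest n giving type II error <= 1 - power for a two-sided
    one-sample z-test when the true mean is effect * sigma away from 0."""
    z_crit = norm.ppf(1 - alpha / 2)
    n = 2
    while True:
        # exact power under the alternative (second term is the far tail)
        pw = norm.cdf(effect * sqrt(n) - z_crit) + norm.cdf(-effect * sqrt(n) - z_crit)
        if pw >= power:
            return n
        n += 1

print(min_n(0.5), min_n(0.8))   # 32 and 13 observations, respectively
```

Halving the hypothesised effect size roughly quadruples the required sample size, which is why the unknown effect size dominates this kind of planning.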
29,148
Explain Backward algorithm for Hidden Markov Model
Looks like your observation sequence is B,B. Let's denote the observation at time $t$ as $z_t$ and the hidden state at time $t$ as $x_t$. If we denote $\alpha_t(i)$ as the forward values and $\beta_t(i)$ as the backward values, ($i$ is one of the possible hidden states) $\alpha_t(i)=P(x_t=i,z_{1:t})$ This means $\alpha_t(i)$ is the probability of arriving to state $i$ at time $t$ emitting the observations up to time $t$. Then, $\beta_t(i) = P(z_{t+1:T}\mid x_t=i)$ which is the probability of emitting the remaining sequence from $t+1$ until the end of time after being at hidden state $i$ at time $t$. To do the recursion on $\beta_t(i)$ we can write, $P(z_{t+1:T}\mid x_t=i)=\sum\limits_jP(x_{t+1}=j,z_{t+1:T}\mid x_{t}=i)$ Using the chain rule, $P(x_{t+1}=j,z_{t+1:T}\mid x_{t}=i) = P(z_{t+2:T},z_{t+1},x_{t+1}=j\mid x_{t}=i)\\ =P(z_{t+2:T}\mid z_{t+1},x_{t+1}=j, x_{t}=i)P(z_{t+1}\mid x_{t+1}=j, x_{t}=i)P(x_{t+1}=j\mid x_{t}=i)$ From the conditional independencies of the HMM, the above probabilities simplify to $P(z_{t+2:T}\mid x_{t+1}=j)P(z_{t+1}\mid x_{t+1}=j)P(x_{t+1}=j\mid x_{t}=i)$ Note that $P(z_{t+2:T}\mid x_{t+1}=j) = \beta_{t+1}(j)$ from our definition. Substituting into $P(z_{t+1:T}\mid x_t=i)$ we get, $\beta_t(i) = P(z_{t+1:T}\mid x_t=i) = \sum\limits_j \beta_{t+1}(j)P(z_{t+1}\mid x_{t+1}=j)P(x_{t+1}=j\mid x_t=i)$ Now you have a recursion for beta. The last two terms of this equation are known from your model. Here, starting from the end of the chain ($T$), we go backward, calculating all the $\beta_t$; hence the backward algorithm. In the forward algorithm you start from the beginning and go to the end of the chain. In your model you have to initialize $\beta_T(i) = P(\emptyset \mid x_T=i)=1$ for all $i$. This is the probability of not emitting observations after $T=2$.
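The recursion translates directly into code. The transition/emission probabilities below are made up for illustration, since the question's model parameters aren't given here:

```python
import numpy as np

def backward(trans, emit, obs):
    """beta[t, i] = P(z_{t+1:T} | x_t = i) for a discrete HMM.
    trans[i, j] = P(x_{t+1} = j | x_t = i); emit[j, z] = P(z | x_t = j)."""
    T, n_states = len(obs), trans.shape[0]
    beta = np.zeros((T, n_states))
    beta[T - 1] = 1.0                 # no observations are emitted after T
    for t in range(T - 2, -1, -1):
        # beta_t(i) = sum_j beta_{t+1}(j) P(z_{t+1} | j) P(j | i)
        beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1])
    return beta

# Made-up two-state model; observations coded A = 0, B = 1
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
obs = [1, 1]                          # the sequence B, B
pi = np.array([0.5, 0.5])             # assumed initial distribution

beta = backward(trans, emit, obs)
likelihood = np.sum(pi * emit[:, obs[0]] * beta[0])   # agrees with the forward pass
print(beta, likelihood)
```

Combining the backward values at $t=1$ with the initial distribution and first emission gives the sequence likelihood, the same number the forward algorithm produces.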
29,149
Two Sample chi squared test
First some notation. Let $\left\{X_t\right\}_{1,\ldots,m}$ and $\left\{Y_t\right\}_{1,\ldots,n}$ denote the categorical sequences associated with $\mathbf{X}_m$ and $\mathbf{Y}_n$, i.e. $\text{Pr}\left\{X_t = i\right\} = a_i, \text{Pr}\left\{Y_t = i\right\} = b_i$. Let $N=n+m$. Consider the binarizations $$\begin{align*} \mathbf{X}_{i}^* &= (X^*_{1,i},\ldots,X_{N,i}^*) = (\delta_{i,X_1},\ldots,\delta_{i,X_m},0,\ldots,0)\\ \mathbf{Y}_{i}^* &= (Y^*_{1,i},\ldots,Y_{N,i}^*)= (0,\ldots,0,\delta_{i,Y_1},\ldots,\delta_{i,Y_n})\\ \end{align*}$$ where $\delta_{i,j}\equiv \mathbf{1}_{i=j}$ is Kronecker's delta. So we have $$X_{m,i} = \sum_{t=1}^{N} X_{t,i}^* = \sum_{t=1}^m \delta_{i,X_t} \qquad Y_{n,i} = \sum_{t=1}^{N} Y_{t,i}^* = \sum_{t=1}^n \delta_{i,Y_t}$$ Now we begin the proof. First we combine the two summands of the test statistic. Note that $$\begin{align*} X_{m,i} - m\hat{c}_i &= \dfrac{(n+m)X_{m,i} - m(X_{m,i} + Y_{n,i})}{n+m}\\ &= \dfrac{nX_{m,i} - mY_{n,i}}{n+m}\\ Y_{n,i} - n\hat{c}_i &= \dfrac{(n+m)Y_{n,i} - n(X_{m,i} + Y_{n,i})}{n+m}\\ &= \dfrac{mY_{n,i} - nX_{m,i}}{n+m} \end{align*}$$ So we can write the test statistic as $$\begin{align*} S &= \sum_{i=1}^k \dfrac{(X_{m,i} - m\hat{c}_i)^2}{m\hat{c}_i} + \sum_{i=1}^k \dfrac{(Y_{n,i} - n\hat{c}_i)^2}{n\hat{c}_i}\\ &= \sum_{i=1}^k \dfrac{(nX_{m,i} - mY_{n,i})^2}{(n+m)^2m\hat{c}_i} + \sum_{i=1}^k \dfrac{(nX_{m,i} - mY_{n,i})^2}{(n+m)^2n\hat{c}_i}\\ &= \sum_{i=1}^k \dfrac{(nX_{m,i} - mY_{n,i})^2}{nm(n+m)\hat{c}_i} \end{align*}$$ Next note that $$nX_{m,i} - mY_{n,i} = \sum_{t=1}^N nX_{t,i}^* - mY_{t,i}^* = Z_{i}$$ with the following properties $$\begin{align*} \text{E}[Z_{i}] &= n\text{E}[X_{m,i}] - m\text{E}[Y_{n,i}]\\ &= nma_i - nma_i = 0\\ \text{Var}[Z_{i}] &= \text{Var}[nX_{m,i} - mY_{n,i}]\\ &= n^2\text{Var}[X_{m,i}] + m^2\text{Var}[Y_{n,i}] \qquad\text{Note $X_{m,i}$ and $Y_{n,i}$ are independent}\\ &= n^2ma_i(1-a_i) + m^2na_i(1-a_i)\\ &= nm(n+m)a_i(1-a_i)\\ \text{Cov}[Z_{i},Z_{j}] &= \text{E}[Z_{i}Z_{j}] - \text{E}[Z_{i}]\text{E}[Z_{j}]\\ &= \text{E}[(nX_{m,i} - mY_{n,i})(nX_{m,j} - mY_{n,j})]\\ &= n^2(-ma_ia_j + m^2a_ia_j) - 2n^2m^2a_ia_j + m^2(-na_ia_j+n^2a_ia_j)\\ &= -nm(n+m)a_ia_j \end{align*}$$ and so by the multivariate CLT we have $$\dfrac{1}{\sqrt{nm(n+m)}}\mathbf{Z} = \dfrac{n\mathbf{X}_m - m \mathbf{Y}_n}{\sqrt{nm(n+m)}}\overset{D}{\to} \text{N}(\mathbf{0},\Sigma)$$ where the $(i,j)$th element of $\Sigma$ is $\sigma_{ij} = a_i(\delta_{ij} - a_j)$. Since $\hat{\mathbf{c}} = (\hat{c}_1,\ldots,\hat{c}_k) \overset{p}{\to} (a_1,\ldots,a_k)=\mathbf{a}$, by Slutsky we have $$\dfrac{n\mathbf{X}_m - m \mathbf{Y}_n}{\sqrt{nm(n+m)\hat{\mathbf{c}}}}\overset{D}{\to} \text{N}(\mathbf{0},\mathbf{I}_k - \sqrt{\mathbf{a}}\sqrt{\mathbf{a}}')$$ (with the division and square root taken componentwise), where $\mathbf{I}_k$ is the $k\times k$ identity matrix and $\sqrt{\mathbf{a}} = (\sqrt{a_1},\ldots,\sqrt{a_k})$. Since $\mathbf{I}_k - \sqrt{\mathbf{a}}\sqrt{\mathbf{a}}'$ has eigenvalue $0$ of multiplicity $1$ and eigenvalue $1$ of multiplicity $k-1$, by the continuous mapping theorem (or see Lemma 17.1, Theorem 17.2 of van der Vaart) we have $$\sum_{i=1}^k \dfrac{(nX_{m,i} - mY_{n,i})^2}{nm(n+m)\hat{c}_i} \overset{D}{\to} \chi^2_{k-1}$$
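As a numerical sanity check (a sketch with made-up counts): the statistic $S$ above coincides with the usual Pearson chi-squared statistic of the $2\times k$ contingency table, which scipy can compute for comparison.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Made-up category counts for the two samples, k = 3 categories
X_counts = np.array([30, 50, 20])     # sample of size m = 100
Y_counts = np.array([45, 80, 75])     # sample of size n = 200
m, n = X_counts.sum(), Y_counts.sum()

c_hat = (X_counts + Y_counts) / (m + n)               # pooled proportions
S = np.sum((X_counts - m * c_hat) ** 2 / (m * c_hat)) \
  + np.sum((Y_counts - n * c_hat) ** 2 / (n * c_hat))

# The simplified single-sum form derived above
S2 = np.sum((n * X_counts - m * Y_counts) ** 2 / (n * m * (n + m) * c_hat))

# Comparison: Pearson chi-squared of the 2 x k contingency table
chi2, pval, dof, _ = chi2_contingency(np.vstack([X_counts, Y_counts]), correction=False)
print(S, S2, chi2, dof)               # S, S2 and chi2 agree; dof = k - 1 = 2
```

The agreement is expected because the pooled expected counts $m\hat{c}_i$ and $n\hat{c}_i$ are exactly the independence-model expected counts of the contingency table.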
29,150
adonis in vegan: order of variables or use of strata
As you've noted yourself, by running two adonis models with your fixed factors inverted you see that both the variance assigned to each factor and the P-values differ each time. This occurs in unbalanced designs such as yours, where the degrees of freedom associated with each factor differ. From the description of your experiment, it looks like a classical case of a nested design where Species is nested in Site. In this case the model you're looking for should look like this: mod <- adonis(jacc ~ Site / Species, strata = Site, data = df_compare). Do note that nestedness should be stated in the model formulation as well as in the strata (see the reply by Jari Oksanen).
29,151
What are dangers of calculating Pearson correlations (instead of tetrachoric ones) for binary variables in factor analysis?
Linear factor analysis is, theoretically and logically, for continuous variables only. If the variables are not continuous but are, for example, dichotomous, one way forward is to posit underlying continuous variables and declare that the observed variables are the binned versions of these underlying, "true" ones. You cannot quantify a dichotomous variable into a scale one without an extraneous "tutor", but you can still infer the correlations which would have been observed if your variables had not been binned and were the "original" continuous, normally distributed ones. These are the tetrachoric correlations (or polychoric correlations, if in place of binary variables you have ordinal ones). So, using tetrachoric correlations (inferred Pearson correlations) in place of phi correlations (observed Pearson correlations with dichotomous data) is a logical act.

Phi correlations computed on dichotomously binned variables are very sensitive to the cut point (aka "difficulty level of the task") at which the binning took place. A pair of variables can hope to attain the theoretical bound $r=1$ only when they are binned at equivalent cut points. The more different the cut points, the lower the maximal bound of possible $r$ between them. (This is the general effect of the sameness of marginal distributions on the possible range of Pearson $r$, but in dichotomous variables this effect is sharpest because there are too few values to take on.) So, the phi correlations in a matrix can be seen as unequally deflated due to the contrasting marginal distributions of the dichotomous variables: you don't know whether one correlation is larger than another "truly" or because of the different cut points in the two pairs of variables. The number of factors to extract (following criteria such as Kaiser's "eigenvalue > 1") will be inflated: some of the extracted "factors" are the outcome of the unevenness and diversity of the cut points, not substantive latent factors. This is a practical reason not to use phi correlations (at least in their raw, non-rescaled form).

There has been evidence in simulation/binning studies that factor analysis based on tetrachoric correlations worsens if there are many strong (>0.7) correlations in the matrix. Tetrachoric correlation is not ideal either: if the cut points of the correlating underlying variables are at opposite extremes (so that the marginal distributions of the dichotomous variables are oppositely skewed) while the underlying association is strong, the tetrachoric coefficient overestimates it further. Note also that a tetrachoric correlation matrix is not necessarily positive semidefinite in smaller samples and might thus need correction ("smoothing"). Still, it is regarded by many as a better way than doing factor analysis on plain Pearson (phi) coefficients. But why do factor analysis on binary data at all? There are other options, including latent trait / IRT (a form of "logistic" factor analysis) and multiple correspondence analysis (if you see your binary variables as nominal categories). See also: Assumptions of linear factor analysis. Rescaled Pearson $r$ could be a (but not very convincing) alternative to tetrachoric $r$ for FA.
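The cut-point deflation of phi correlations is easy to see by simulation (an illustrative sketch; the latent correlation and cut points are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
rho, n = 0.7, 200_000

# Latent bivariate normal with correlation rho
x, y = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n).T

def phi(a, b):
    """Pearson correlation of two binary vectors, i.e. the phi coefficient."""
    return np.corrcoef(a, b)[0, 1]

phi_same = phi(x > 0, y > 0)      # both variables cut at the median
phi_diff = phi(x > -1, y > 1)     # very different cut points

print(phi_same, phi_diff)         # both below rho; phi_diff far below phi_same
```

Even with identical cut points the phi coefficient sits well below the latent $\rho$, and with opposed cut points it drops much further, which is exactly the unequal deflation described above.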
29,152
Modeling customer churn - Machine learning versus hazard/survival models
I think your question could be further defined. The first distinction for churn models is between creating:

(1) a binary (or multi-class, if there are multiple types of churn) model to estimate the probability of a customer churning within or by a certain future point (e.g. the next 3 months);
(2) a survival-type model creating an estimate of the risk of attrition each period (say, each month for the next year).

Which of the two is correct for your situation depends on the model use. If you really want to understand the attrition risk over time and perhaps understand how (possibly time-varying) variables interact with time then a survival model is appropriate. For a lot of customer models, I prefer to use discrete time hazard models for this purpose because time is often discrete in databases and the hazard estimate is a probability of the event. Cox regression is another popular choice, but time is treated as continuous (or via adjustment for ties) and the hazard is technically not a probability. For most churn models, where a company is interested in targeting those x% of customers most at risk and the database is scored each time a targeting campaign launches, the binary (or multi-class) option is normally what is needed. The second choice is how to estimate the models. Do you use a traditional statistical model such as logistic regression for the binary (multi-class) model, or a machine learning algorithm (e.g. random forest)? The choice is based on which gives the most accurate model and what level of interpretability is required. For discrete time hazard models, a logistic regression is typically used with splines to introduce non-linear effects of time. This can also be done with neural networks and many other types of ML algorithms, as the setup is simply supervised learning with a "person-period" data set. Further, Cox regression can be fit with traditional algorithms like SAS proc phreg or R coxph().
The machine learning algorithm GBM also fits Cox regression with a selected loss function. As has been mentioned, ML algorithms for survival analysis using random forests and other tree-based methods have also been developed, and there are many available within R.
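To make the discrete-time setup concrete, here is a minimal Python sketch (with made-up customer records) of building the "person-period" data set on which the logistic regression would be fit:

```python
# Sketch: expanding customer records into a "person-period" data set
# for a discrete-time hazard (logistic) model. Customer tuples are
# hypothetical: (id, months_observed, churned_at_end).
customers = [("A", 3, True), ("B", 5, False), ("C", 2, True)]

person_period = []
for cid, months, churned in customers:
    for t in range(1, months + 1):
        # event = 1 only in the final observed month of a churner;
        # non-churners are censored (event = 0 in every row)
        event = int(churned and t == months)
        person_period.append({"id": cid, "month": t, "event": event})

# A logistic regression of `event` on month dummies/splines plus
# covariates, fit on these rows, estimates the discrete-time hazard.
print(len(person_period), sum(r["event"] for r in person_period))
```

Each customer contributes one row per period at risk, so the three customers above expand to 10 rows with 2 churn events.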
29,153
Modeling customer churn - Machine learning versus hazard/survival models
First of all I would clarify where exactly you make the distinction between machine learning and hazard models. From my understanding the ML literature distinguishes between parametric and non-parametric models (among others). And second, what do you need the model for? Is it for scientific research or something else? In any event, choosing the appropriate model to describe your data depends first of all on what you need the model for. To your question: it depends on how much you know about the data-generating process. If for example you take the famous coin flip or die roll, you have a very good idea about the process that generates the expected outcome of an experiment. In that case you really want to use a parametric (Bayesian or frequentist) estimation, because it will give you a very good estimate of the unknown parameter. Furthermore, these models are very well understood, which has many advantages. If you don't know the data-generating process, or you are uncertain of it, you don't have much of a choice: you will need to estimate the parameters that describe the data from the data itself. If you decide on such an approach, you must accept that these models have drawbacks (depending on the specific model etc.). From my understanding, the less you know about a process, the more you will need to estimate from the data itself, which will certainly come at a price.
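As a toy illustration of the parametric case mentioned above (the coin flip), the maximum-likelihood estimate of the Bernoulli parameter is just the sample mean. A minimal Python sketch with made-up data:

```python
import math

flips = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # hypothetical 0/1 outcomes

# Closed-form MLE for a Bernoulli parameter: the sample mean
p_hat = sum(flips) / len(flips)

# Sanity check: the log-likelihood over a grid peaks at the same value
def loglik(p):
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in flips)

grid = [i / 100 for i in range(1, 100)]
best = max(grid, key=loglik)
print(p_hat, best)  # both 0.7
```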
29,154
What are the fitted values in a random effects model?
Your model does fit a population mean intercept. It also fits a variance for the population distribution. From the data and these two, random intercepts are predicted for each study unit. (See my answer here: Why do the estimated values from a Best Linear Unbiased Predictor (BLUP) differ from a Best Linear Unbiased Estimator (BLUE)?) In your (relatively straightforward) model, the fitted values are the sum of the estimated fixed effect coefficient and the predicted random intercept. Put another way, the values are the predicted values ($\hat y_i$) for the study units based on your model, knowing both the estimated fixed and predicted random effects. Here is a quick demonstration using the code from my linked answer:

cbind(fitted(re.mod3),
      rep(coef(summary(re.mod3))[1] + ranef(re.mod3)[[1]][[1]], each=5))
#        [,1]     [,2]
# 1  13.19965 13.19965
# 2  13.19965 13.19965
# 3  13.19965 13.19965
# 4  13.19965 13.19965
# 5  13.19965 13.19965
# 6  16.31164 16.31164
# 7  16.31164 16.31164
# 8  16.31164 16.31164
# 9  16.31164 16.31164
# 10 16.31164 16.31164
# 11 17.47962 17.47962
# 12 17.47962 17.47962
# 13 17.47962 17.47962
# 14 17.47962 17.47962
# 15 17.47962 17.47962
# 16 15.49802 15.49802
# 17 15.49802 15.49802
# 18 15.49802 15.49802
# 19 15.49802 15.49802
# 20 15.49802 15.49802
# 21 13.82224 13.82224
# 22 13.82224 13.82224
# 23 13.82224 13.82224
# 24 13.82224 13.82224
# 25 13.82224 13.82224
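For intuition on where those fitted values come from, here is a rough Python sketch of the random-intercept case: each fitted value is the fixed intercept (grand mean) plus a shrunken group deviation (the BLUP). The variance components are assumed known here (lmer estimates them from the data), and the groups and variances below are hypothetical:

```python
# Sketch of fitted values in a random-intercept model via BLUP shrinkage.
groups = {"g1": [12.1, 13.4, 12.9],
          "g2": [16.0, 16.8],
          "g3": [14.2, 15.1, 14.8, 15.0]}
sigma2_e = 0.5   # residual variance (assumed known)
sigma2_b = 2.0   # random-intercept variance (assumed known)

all_y = [y for ys in groups.values() for y in ys]
grand = sum(all_y) / len(all_y)   # stand-in for the fixed intercept

fitted = {}
for g, ys in groups.items():
    n_i = len(ys)
    gmean = sum(ys) / n_i
    # shrinkage factor in (0, 1): larger groups are shrunk less
    shrink = (n_i * sigma2_b) / (n_i * sigma2_b + sigma2_e)
    blup = shrink * (gmean - grand)   # predicted random intercept
    fitted[g] = grand + blup          # fixed effect + BLUP

print({g: round(v, 3) for g, v in fitted.items()})
```

Each group's fitted value lands between its raw group mean and the grand mean, which is exactly the "partial pooling" behavior of the BLUPs.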
29,155
Why is forecasting of ARMA models performed by Kalman filter
To me one of the main advantages is the handling of missing data and uneven time steps. The Kalman filter easily handles missing observations, and can actually be used to impute them. OLS and MLE don't handle missing data as easily, and not every package supports this feature, unlike the Kalman filter.
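A minimal sketch of that idea, a 1-D local-level Kalman filter in Python with made-up numbers: the predict step always runs, and the update step is simply skipped whenever the observation is missing (None), so gaps in the series are handled naturally.

```python
# Minimal 1-D local-level (random walk) Kalman filter sketch.
def kalman_filter(ys, q=0.1, r=1.0, x0=0.0, p0=10.0):
    x, p, out = x0, p0, []
    for y in ys:
        # predict: state variance grows by the process noise q
        p = p + q
        # update: only when an observation is available
        if y is not None:
            k = p / (p + r)          # Kalman gain
            x = x + k * (y - x)
            p = (1 - k) * p
        out.append(x)
    return out

# Two observations are missing; the filter coasts through them,
# carrying the last state estimate forward.
states = kalman_filter([1.0, None, None, 1.4, 1.3])
print([round(s, 3) for s in states])
```

Note that during the missing steps the state estimate is unchanged while its variance keeps growing, so the next real observation pulls the estimate more strongly.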
29,156
Understanding warning message "Ties are present" in Kruskal-Wallis post hoc
A tie means that several observations share the same value (hence the same rank). For example, a sample consists of observations: $1, 3, 3, 5, 10, 10, 10$. "$3$" and "$10$" are two ties, where $3$ has $2$ replicates and $10$ has $3$ replicates. Such a sample corresponds to the rank statistics: $1, 2, 2, 4, 5, 5, 5$. When ties are present, usually we need to break them (if not, you will probably get the warning message you showed). And conventionally, we break the ties in the rank statistics, in contrast to breaking ties in the original observations. Since the Kruskal-Wallis test uses rank statistics, it is sufficient to answer your question by restricting the scope to the rank statistics. Two tie-breaking methods are common. One is "breaking ties at random": we regenerate distinct ranks randomly among the ties. Continuing the above example, for the tie "$2, 2$", we may draw two numbers without replacement from the set $\{2, 3\}$, then assign them to the second and third positions, for example "3, 2". Similarly, we can do that for the tie at $10$. A possible adjusted rank statistic is $1, 3, 2, 4, 6, 5, 7$; hence the ties are broken. The disadvantage of this method is that you may get different test statistics across analyses, since the tie-breaking is random. The second method is "averaging": it assigns each tied element the average of the tied ranks. Using this method, the original rank statistics become: $1, 2.5, 2.5, 4, 6, 6, 6$. This method essentially adjusts the ties instead of breaking them. In software, you may specify tie-breaking options, for which you should consult the function documentation. For a similar discussion on this issue, see How does ties.method argument of R's rank function work?
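The two tie-breaking methods can be sketched in a few lines of Python, using the example sample above:

```python
import random

data = [1, 3, 3, 5, 10, 10, 10]

def average_ranks(xs):
    """Assign each tied element the mean of the tied ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and xs[order[j]] == xs[order[i]]:
            j += 1                      # find the end of this tie group
        avg = (i + 1 + j) / 2           # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg
        i = j
    return ranks

def random_ranks(xs, rng=random):
    """Break ties at random: distinct ranks, random order within ties."""
    order = sorted(range(len(xs)), key=lambda i: (xs[i], rng.random()))
    ranks = [0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

print(average_ranks(data))   # [1.0, 2.5, 2.5, 4.0, 6.0, 6.0, 6.0]
random.seed(0)
print(random_ranks(data))    # one random tie-broken assignment
```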
29,157
Why is my combined p-value, obtained using the Fisher's method, so low?
Your p-value looks to be correct. Consider that if the null hypothesis is true, p-values should be uniform; when you have many of them, you're effectively checking your collection of p-values for consistency with uniformity, against the alternative that they're smaller than you'd expect from a uniform (Fisher's method measures this degree of being too small in a particular way). Your values are skewed toward the low side (e.g. consider that 7 values are below 0.25, but only 2 are above 0.75). Fisher's approach can pick up that your p-values tend to be too small. If the p-values were from a uniform, they should lie close to the red line in a plot of p-values against uniform scores (essentially the ecdf shifted down by $\frac{1}{2n}$, equivalently the average of the ecdf before and after the point). We can see that the large p-values tend to be too small (they lie left of the line near the top of the plot). Because of that, the Fisher p-value is quite small.
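For reference, Fisher's statistic is $X = -2\sum_i \ln p_i$, which under the null follows a chi-square distribution with $2n$ degrees of freedom. A small Python sketch (the p-values below are made up; since the df is always even, the chi-square survival function has a closed form, so no external library is needed):

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: statistic and combined p-value.

    X = -2 * sum(ln p_i) ~ chi-square with 2n df under H0. For even
    df = 2n, P(X > x) = exp(-x/2) * sum_{i<n} (x/2)^i / i! exactly.
    """
    n = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    sf = math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(n))
    return x, sf

# Hypothetical p-values skewed toward the low side
pvals = [0.02, 0.10, 0.15, 0.20, 0.24, 0.30, 0.45, 0.55, 0.80, 0.90]
stat, combined_p = fisher_combine(pvals)
print(round(stat, 2), combined_p)
```

A quick consistency check: with a single p-value the method returns that p-value unchanged, and collections of smaller p-values yield smaller combined p-values.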
29,158
K-means: Why minimizing WCSS is maximizing Distance between clusters?
K-means is all about the analysis-of-variance paradigm. ANOVA - both uni- and multivariate - is based on the fact that the sum of squared deviations about the grand centroid is comprised of such scatter about the group centroids and the scatter of those centroids about the grand one: SStotal=SSwithin+SSbetween. So, if SSwithin is minimized then SSbetween is maximized. The SS of deviations of some points about their centroid (arithmetic mean) is known to be directly related to the overall squared Euclidean distance between the points: the sum of squared deviations from the centroid equals the sum of pairwise squared Euclidean distances divided by the number of points. (This is the direct extension of the trigonometric property of the centroid. And this relation is exploited also in the double centering of a distance matrix.) Thus, saying "SSbetween for centroids (as points) is maximized" is an alias for saying "the (weighted) set of squared distances between the centroids is maximized". Note: in SSbetween each centroid is weighted by the number of points Ni in that cluster i. That is, each centroid is counted Ni times. For example, with two centroids in the data, 1 and 2, SSbetween = N1*D1^2 + N2*D2^2 where D1 and D2 are the deviations of the centroids from the grand mean. That is where the word "weighted" in the former paragraph stems from.
Example

Data (N=6: N1=3, N2=2, N3=1)

 V1    V2    Group
 2.06  7.73  1
  .67  5.27  1
 6.62  9.36  1
 3.16  5.23  2
 7.66  1.27  2
 5.59  9.83  3

SSdeviations

         V1             V2           Overall
SSt  37.82993333 + 51.24408333 = 89.07401666
SSw  29.50106667 + 16.31966667 = 45.82073333
SSb  8.328866667 + 34.92441667 = 43.25328333

SSt is directly related to the squared Euclidean distances between the data points:

Matrix of squared Euclidean distances

  .00000000   7.98370000  23.45050000   7.46000000  73.09160000  16.87090000
 7.98370000    .00000000  52.13060000   6.20170000  64.86010000  45.00000000
23.45050000  52.13060000    .00000000  29.02850000  66.52970000   1.28180000
 7.46000000   6.20170000  29.02850000    .00000000  35.93160000  27.06490000
73.09160000  64.86010000  66.52970000  35.93160000    .00000000  77.55850000
16.87090000  45.00000000   1.28180000  27.06490000  77.55850000    .00000000

Its sum/2, the sum of the distances = 534.4441000
534.4441000 / N = 89.07401666 = SSt

The same reasoning holds for SSb.

Matrix of squared Euclidean distances between the 3 group centroids (see https://stats.stackexchange.com/q/148847/3277)

  .00000000  22.92738889  11.76592222
22.92738889    .00000000  43.32880000
11.76592222  43.32880000    .00000000

3 centroids are 3 points, but SSb is based on N points (propagated centroids): N1 points representing centroid1, N2 points representing centroid2 and N3 representing centroid3. Therefore the sum of the distances must be weighted accordingly:

N1*N2*22.92738889 + N1*N3*11.76592222 + N2*N3*43.32880000 = 259.51969998
259.51969998 / N = 43.25328333 = SSb

Moral in words: maximizing SSb is equivalent to maximizing the weighted sum of pairwise squared distances between the centroids. (And maximizing SSb corresponds to minimizing SSw, since SSt is constant.)
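The identities used above (SSt = SSw + SSb, and SS about a centroid = sum of pairwise squared distances / N) can be checked numerically on these six points with a short Python sketch:

```python
from itertools import combinations

# The six two-dimensional points from the example, with cluster labels
pts = [(2.06, 7.73), (0.67, 5.27), (6.62, 9.36),
       (3.16, 5.23), (7.66, 1.27), (5.59, 9.83)]
grp = [1, 1, 1, 2, 2, 3]
n = len(pts)

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

def ss_about_centroid(points):
    c = centroid(points)
    return sum(sq_dist(p, c) for p in points)

# SStotal via deviations about the grand centroid
sst = ss_about_centroid(pts)

# SStotal via the centroid property: pairwise squared distances / N
sst_pairwise = sum(sq_dist(a, b) for a, b in combinations(pts, 2)) / n

# SSwithin and SSbetween (centroids weighted by cluster sizes)
clusters = {g: [p for p, gi in zip(pts, grp) if gi == g] for g in set(grp)}
ssw = sum(ss_about_centroid(c) for c in clusters.values())
grand = centroid(pts)
ssb = sum(len(c) * sq_dist(centroid(c), grand) for c in clusters.values())

print(round(sst, 5), round(sst_pairwise, 5), round(ssw + ssb, 5))
```

All three quantities agree with the SSt value of 89.07401666 reported in the worked example.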
29,159
K-means: Why minimizing WCSS is maximizing Distance between clusters?
What you are looking for is the König-Huygens theorem. Cf. p. 3 in this article for an explicit reference to the Huygens formula.
29,160
Random Forest: Predictors have more than 53 categories? [duplicate]
In this youtube video, Jeremy Howard explains his technique for dealing with this problem in R: he separates the levels into two sets according to the number of observations for each level:

Set 1 : levels with $N_{obs}>100$, or with $25<N_{obs}<100$ plus predictive value
Set 2 : all the rest.

I should mention that I'm new to Random Forest, and it is just luck that I watched this video two days ago. And even if this technique makes sense to me (separating into two sets of different importance), I can't explain the choice of these thresholds (which are obviously a bit arbitrary and dataset-dependent), nor at what point one can consider that a level has an honorable predictive value.
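A literal Python sketch of that level-splitting heuristic (the thresholds and data are made up, and the per-level "predictive value" check is stubbed out as a callable you would supply):

```python
from collections import Counter

def split_levels(values, has_predictive_value=lambda lvl: False):
    """Split categorical levels into two sets by observation count.

    Set 1: levels with more than 100 observations, or between 25 and
    100 observations AND some predictive value for the target.
    Set 2: everything else.
    """
    counts = Counter(values)
    set1, set2 = [], []
    for lvl, n_obs in counts.items():
        if n_obs > 100 or (25 < n_obs < 100 and has_predictive_value(lvl)):
            set1.append(lvl)
        else:
            set2.append(lvl)
    return set1, set2

# Hypothetical column with three levels of very different frequencies
values = ["a"] * 150 + ["b"] * 50 + ["c"] * 10
set1, set2 = split_levels(values, has_predictive_value=lambda lvl: lvl == "b")
print(set1, set2)
```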
29,161
Deriving likelihood function for IV-probit
Remember that for a bivariate normal variable $$\begin{pmatrix}X \\ Y\end{pmatrix}\sim\mathcal{N}\left(\begin{bmatrix}\mu_X\\\mu_Y\end{bmatrix}, \begin{bmatrix}\sigma_X^2 & \rho\sigma_X\sigma_Y\\\rho\sigma_X\sigma_Y & \sigma_Y^2\end{bmatrix}\right),$$ the conditional distribution of $Y$ given $X$ is $$Y\mid X \sim \mathcal{N}\left(\mu_Y+\rho\sigma_Y\frac{X-\mu_X}{\sigma_X},\sigma_Y\left[1-\rho^2\right]\right).$$ In the present case, we have \begin{align} u_1 \mid v_2 &\sim \mathcal{N}\left(0+\frac{\eta}{1\cdot\tau}\cdot1\frac{v_2-0}{\tau}, 1\cdot\left[1-\left(\frac{\eta}{1\cdot\tau}\right)^2\right] \right) \\ &= \mathcal{N}\left(\frac{\eta}{\tau^2}v_2, 1-\frac{\eta^2}{\tau^2} \right), \end{align} which means that $$u_1=\frac{\eta}{\tau^2}v_2+\xi$$ where (and this was your first mistake) $$\xi\sim\mathcal{N}\left(0,1-\frac{\eta^2}{\tau^2}\right).$$ We can thus rewrite the first equation \begin{align} y_1^* &= \delta_1 z_1 + \alpha_1 y_2 + u_1 \\ &= \delta_1 z_1 + \alpha_1 y_2 + \frac{\eta}{\tau^2}v_2+\xi \\ &= \delta_1 z_1 + \alpha_1 y_2 + \frac{\eta}{\tau^2}(y_2-\textbf{z}\delta)+\xi. 
\end{align} Now, remember that the conditional probability density function of $X=x$ given $Y=y$ is $$f_{X}(x \mid y)=\frac{f_{XY}(x,y)}{f_{Y}(y)}.$$ In the present case, we have $$f_{1}(y_1 \mid y_2, \mathbf{z})=\frac{f_{12}(y_1,y_2 \mid \mathbf{z})}{f_{2}(y_2 \mid \mathbf{z})},$$ which can be rearranged to your expression $$f_{12}(y_1, y_2 \mid \mathbf{z})= f_{1}(y_1 \mid y_2, \mathbf{z})f_{2}(y_2 \mid \mathbf{z}).$$ Then, we can write the likelihood as a function of the densities of the two independent shocks $v_1,\xi_1$: \begin{align} \mathcal{L}(y_1,y_2\mid \mathbf{z}) &= \prod_i^n f_{1}(y_{1i} \mid y_{2i}, \mathbf{z}_i)f_{2}(y_{2i} \mid \mathbf{z}_i) \\ &= \prod_i^n \Pr\left(y_{1i}=1\right)^{y_{1i}}\Pr\left(y_{1i}=0\right)^{1-y_{1i}}f_{2}(y_{2i} \mid \mathbf{z}_i) \\ &= \prod_i^n \Pr\left(y_{1i}^*>0\right)^{y_{1i}}\Pr\left(y_{1i}^*\leq0\right)^{1-y_{1i}}f_{2}(y_{2i} \mid \mathbf{z}_i) \\ &= \prod_i^n \Pr\left(\delta_1 z_{1i} + \alpha_1 y_{2i} + \frac{\eta}{\tau^2}(y_{2i}-\textbf{z}_{i}\delta)+\xi_i>0\right)^{y_{1i}}\\ &\qquad\quad \Pr\left(\delta_1 z_{1i} + \alpha_1 y_{2i} + \frac{\eta}{\tau^2}(y_{2i}-\textbf{z}_i\delta)+\xi_i\leq0\right)^{1-y_{1i}}\\ &\qquad\quad f_{2}(y_{2i} \mid \mathbf{z}_i) \\ &= \prod_i^n \Pr\left(\xi_i>-\left[\delta_1 z_{1i} + \alpha_1 y_{2i} + \frac{\eta}{\tau^2}(y_{2i}-\textbf{z}_i\delta)\right]\right)^{y_{1i}}\\ &\qquad\quad \Pr\left(\xi_i\leq-\left[\delta_1 z_{1i} + \alpha_1 y_{2i} + \frac{\eta}{\tau^2}(y_{2i}-\textbf{z}_i\delta)\right]\right)^{1-y_{1i}}\\ &\qquad\quad f_{2}(y_{2i} \mid \mathbf{z}_i) \\ &= \prod_i^n \Pr\left(\frac{\xi_i-0}{\sqrt{1-\frac{\eta^2}{\tau^2}}}>-\frac{\delta_1 z_{1i} + \alpha_1 y_{2i} + \frac{\eta}{\tau^2}(y_{2i}-\textbf{z}_i\delta)+0}{\sqrt{1-\frac{\eta^2}{\tau^2}}}\right)^{y_{1i}}\\ &\qquad\quad \Pr\left(\frac{\xi_i-0}{\sqrt{1-\frac{\eta^2}{\tau^2}}}\leq-\frac{\delta_1 z_{1i} + \alpha_1 y_{2i} + \frac{\eta}{\tau^2}(y_{2i}-\textbf{z}_i\delta)+0}{\sqrt{1-\frac{\eta^2}{\tau^2}}}\right)^{1-y_{1i}}\\ 
&\qquad\quad f_{2}(y_{2i} \mid \mathbf{z}_i) \\ &= \prod_i^n \Pr\left(\frac{\xi_i}{\sqrt{1-\frac{\eta^2}{\tau^2}}}>-w_i\right)^{y_{1i}} \Pr\left(\frac{\xi_i}{\sqrt{1-\frac{\eta^2}{\tau^2}}}\leq-w_i\right)^{1-y_{1i}} f_{2}(y_{2i} \mid \mathbf{z}_i) \\ &= \prod_i^n \left[1-\Pr\left(\frac{\xi_i}{\sqrt{1-\frac{\eta^2}{\tau^2}}}\leq-w_i\right)\right]^{y_{1i}} \Pr\left(\frac{\xi_i}{\sqrt{1-\frac{\eta^2}{\tau^2}}}\leq-w_i\right)^{1-y_{1i}} f_{2}(y_{2i} \mid \mathbf{z}_i) \\ &= \prod_i^n \left[1-\Phi(-w_i)\right]^{y_{1i}} \Phi(-w_i)^{1-y_{1i}} \frac{1}{\tau}\varphi\left(\frac{y_{2i}-\mathbf{z}_i\delta}{\tau}\right) \\ &= \prod_i^n \Phi(w_i)^{y_{1i}} \left[1-\Phi(w_i)\right]^{1-y_{1i}} \frac{1}{\tau}\varphi\left(\frac{y_{2i}-\mathbf{z}_i\delta}{\tau}\right) \end{align} where \begin{align} w_i = \frac{\delta_1 z_{1i} + \alpha_1 y_{2i} + \frac{\eta}{\tau^2}(y_{2i}-\textbf{z}_i\delta)}{\sqrt{1-\frac{\eta^2}{\tau^2}}}. \end{align} $\Phi(z)$ and $\varphi(z)$ are the cumulative distribution function and probability density function of the standard normal distribution.
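To make the result concrete, here is a small numerical sketch of this log-likelihood in Python. All data are simulated, the function name is mine rather than from any package, and the $1/\tau$ term in the code is the scale factor of the normal density of $y_2\mid\mathbf{z}$.

```python
import numpy as np
from math import erf, sqrt, log, pi

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ivprobit_loglik(y1, y2, z1, Z, delta1, alpha1, delta, eta, tau):
    """Log-likelihood derived above: probit part with index w_i plus
    the normal density of y2 given z (including its 1/tau scale factor)."""
    v2 = y2 - Z @ delta                                   # first-stage residual
    w = (delta1 * z1 + alpha1 * y2 + (eta / tau**2) * v2) / sqrt(1.0 - eta**2 / tau**2)
    Phi = np.clip([norm_cdf(wi) for wi in w], 1e-12, 1 - 1e-12)  # guard against log(0)
    ll = np.sum(y1 * np.log(Phi) + (1 - y1) * np.log(1 - Phi))   # probit part
    ll += np.sum(-0.5 * (v2 / tau) ** 2 - 0.5 * log(2 * pi) - log(tau))  # density of y2 | z
    return ll
```

Maximizing this function over $(\delta_1,\alpha_1,\delta,\eta,\tau)$ gives the ML estimator; here it only serves as a sanity check that the derived expression evaluates properly on simulated data.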
Deriving likelihood function for IV-probit
29,162
Has anybody ever found data where ARCH and GARCH models work?
My experiences with programming/implementing and testing ARCH/GARCH procedures have led me to the conclusion that they must be useful somewhere and someplace, but I haven't seen it. Gaussian violations such as unusual values, level shifts, seasonal pulses and local time trends should be dealt with initially to handle changes in volatility/error variance, as these remedies have less serious side effects. After any of these adjustments, care should be taken to validate that the model parameters are constant over time. Furthermore, the error variance may not be constant, but simpler/less intrusive remedies like Box-Cox transformations and detecting deterministic break points in the error variance à la Tsay are much more useful and less destructive. Finally, if none of these procedures work, my last gasp would be to throw ARCH/GARCH at the data and then add a ton of holy water. I firmly agree with your findings and conclude that these are methods looking for data, or just dissertation topics flying in the wind.
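The point about deterministic break points in the error variance can be made concrete with a short simulation: noise whose variance jumps once at a known date shows ARCH-like persistence in the squared residuals, and the effect vanishes after rescaling each regime by its own standard deviation. A hedged numpy sketch with simulated data and illustrative names:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
# constant-variance noise within regimes, with ONE deterministic variance break at n/2
e = np.concatenate([rng.normal(0, 1, n // 2), rng.normal(0, 3, n // 2)])

def lag1_acf(x):
    """Lag-1 sample autocorrelation of a series."""
    return np.corrcoef(x[1:], x[:-1])[0, 1]

rho_raw = lag1_acf(e**2)          # positive: looks like ARCH persistence

# rescale each regime by its own standard deviation (break-point adjustment)
s = np.where(np.arange(n) < n // 2, e[: n // 2].std(), e[n // 2 :].std())
rho_adj = lag1_acf((e / s) ** 2)  # near zero: the apparent ARCH effect vanishes
```

Here `rho_raw` is clearly positive even though no ARCH process generated the data, while `rho_adj` is near zero, which is the sense in which a single deterministic break can masquerade as ARCH.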
29,163
Has anybody ever found data where ARCH and GARCH models work?
Some background information first: Given a dependent variable $y_t$, independent variables $X_t$ and a conditional mean model $$y_t=\beta X_t+\epsilon_t$$ you can use a GARCH model to model the conditional variance of $\epsilon_t$. Say you have fit a GARCH model and obtained fitted conditional standard deviations $\hat \sigma_t$. If you scale the residuals $\hat \epsilon_t$ by the inverse of the fitted conditional standard deviations $\hat \sigma_t$, you obtain scaled residuals $\hat u_t:=\frac{\hat \epsilon_t}{\hat \sigma_t}$. You would like these to be "nice". At least they should have no ARCH patterns remaining in them. This can be tested by the Li-Mak test, for example. 1: regarding nonstationary residuals A GARCH model does not produce any residuals of its own -- there is no GARCH-model residual in the GARCH formula (only lagged errors $\epsilon_t$ from the conditional mean model that are used as regressors in the GARCH model). But what exactly do you mean by nonstationarity: a unit root? heteroskedasticity? a level shift? When you mention nonstationary residuals, do you have in mind $\hat u_t$ or $\hat \epsilon_t$, or still something else? Edit: the type of nonstationarity is a unit root. I suspect this is due to a poor model for the conditional mean rather than a failure of GARCH. Since the effect of GARCH on $\hat u_t$ is the scaling of $\hat \epsilon_t$ by $\frac{1}{\hat \sigma_t}$, that only changes the scale of $\hat \epsilon_t$ but cannot introduce a unit root. That is, the unit root must have already been a feature of $\hat \epsilon_t$, and that is a problem of the conditional mean model, not the conditional variance model. 2: regarding heteroskedasticity More could be said once you clarify which residuals you have in mind. Edit: the residuals in mind are $\hat u_t$. If $\hat u_t$ are conditionally heteroskedastic but the pattern is not of ARCH nature, then you could augment the standard GARCH model with explanatory variables to explain the remaining heteroskedasticity. 
3: regarding non-normality $\epsilon_t$ can be non-normal, this is no problem. $u_t$ should match the distribution you assume when fitting a GARCH model (you need to assume a distribution to be able to obtain the likelihood function that will be maximized when fitting the GARCH model). If you assume a normal distribution for $u_t$ but can reject normality for $\hat u_t$, then it's a problem. But you do not need to assume normality. A $t$ distribution with 3 or 4 degrees of freedom has been argued to be more relevant than a normal distribution for financial returns, for example. 4: regarding residuals are often non-stationary, heteroskedastic and not normal, so the model doesn't explain volatility Edit (more precise formulation): I am not sure I follow the logical connection here. Since GARCH aims to explain a specific type of conditional heteroskedasticity (not any and all types of CH, but autoregressive CH), you should assess it on that basis. If $\hat \epsilon_t$ are autoregressively conditionally heteroskedastic (this can be tested by the ARCH-LM test) but $\hat u_t$ are conditionally homoskedastic (as tested by the Li-Mak test), the GARCH model has done its job. My experience with GARCH models (admittedly limited) is that they do their job but of course are not a panacea.
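The scaling step described above can be illustrated with a short numpy simulation. Parameter values are arbitrary, and a real analysis would use fitted $\hat\sigma_t$ from a package (e.g. rugarch in R or arch in Python) rather than the true conditional standard deviations used here:

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate eps_t = sigma_t * u_t with
    sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)
    eps, sigma2 = np.empty(n), np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    eps[0] = np.sqrt(sigma2[0]) * u[0]
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * u[t]
    return eps, np.sqrt(sigma2)

eps, sigma = simulate_garch11(5000)
u_hat = eps / sigma   # scaled residuals: i.i.d. N(0, 1) if the model is right
```

The squared errors `eps**2` are autocorrelated (the ARCH pattern), while the squared scaled residuals `u_hat**2` are not -- which is exactly the property a Li-Mak test checks on fitted values.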
29,164
How does one verify causation?
I think this is a very good question. I encounter this problem often and reflect on it a lot. I do research in medical science, and the notion in medicine is that nothing is proven causal, never, never, never, until a randomized controlled clinical trial, preferably with a pill (or any other exposure that can be triple-blinded), has proven an effect on the response of interest. This is quite sad, as all other studies are considered to be association studies, which tends to reduce their impact. Hill and Richard Doll thought about this. The former formulated Hill's criteria for causality: The Bradford Hill criteria, otherwise known as Hill's criteria for causation, are a group of minimal conditions necessary to provide adequate evidence of a causal relationship between an incidence and a consequence, established by the English epidemiologist Sir Austin Bradford Hill (1897–1991) in 1965. Strength: A small association does not mean that there is not a causal effect, though the larger the association, the more likely that it is causal. Consistency: Consistent findings observed by different persons in different places with different samples strengthen the likelihood of an effect. Specificity: Causation is likely if there is a very specific population at a specific site and disease with no other likely explanation. The more specific an association between a factor and an effect is, the bigger the probability of a causal relationship. Temporality: The effect has to occur after the cause (and if there is an expected delay between the cause and expected effect, then the effect must occur after that delay). Biological gradient: Greater exposure should generally lead to greater incidence of the effect. However, in some cases, the mere presence of the factor can trigger the effect. In other cases, an inverse proportion is observed: greater exposure leads to lower incidence. 
Plausibility: A plausible mechanism between cause and effect is helpful (but Hill noted that knowledge of the mechanism is limited by current knowledge). Coherence: Coherence between epidemiological and laboratory findings increases the likelihood of an effect. However, Hill noted that "... lack of such [laboratory] evidence cannot nullify the epidemiological effect on associations". Experiment: "Occasionally it is possible to appeal to experimental evidence". Analogy: The effect of similar factors may be considered. This was formulated some 50 years ago, before the advent of randomized trials (which might not be of interest to your particular field), but it is noteworthy that experiments were not given a crucial role in the Hill criteria. I'd like to think that observational data, if analysed with proper statistical methods, does allow for inferences of causality. (Of course this depends on many factors.) But in my field, when it comes to changing the management of patients, it is rare to see guidelines shaped by anything other than randomized trials, and the prelude to guidelines often underlines that certain causality can only be obtained in randomized trials. Now I know that many of you will not agree with me. I don't agree with myself either. But it might add to a discussion.
29,165
How does one verify causation?
Statistics provides tools for detecting and modelling regularities in the data. The modelling process is typically guided by subject-matter knowledge. When the model represents the subject-matter mechanism, statistical properties of the estimated model tell whether the data is at odds with the modelled mechanism. Then causality (or lack thereof) is inferred -- and this is done on the subject-matter domain. An example: suppose you have a random sample of two variables $x$ and $y$. The correlation between them is large and statistically significant. So far, can you say whether $x$ causes $y$? I don't think so. Now add subject-matter knowledge to the data. Case A: the observed variables are length of feet and favourite shoe size $\rightarrow$ people like buying shoes that fit their feet size, so feet size causes the choice of shoe size (but not the other way around). Case B: the observed variables are height and weight of people $\rightarrow$ adults tend to be both taller and heavier than kids, but does that mean weight causes height or height causes weight? Genetics, nutrition, age and other factors cause both.
29,166
How does one verify causation?
The question currently assumes that the quantities are correlated, which implies that the person determining the correlation must have good reason to believe the variables share a linear relationship. Granger causality might be the best tool for determining linear causal relationships. Granger was an economist who shared a Nobel Prize for his work on linear causation. Granger suggests that for a set of variables $\{X_t^{(i)}\}_{i=1}^k$ to be considered a cause of effect $Y_t$, two conditions should hold: The cause should occur before the effect. The cause should contain information about the effect that is not available otherwise. To find the shared information one can use regression (although beware that significant regression coefficients do not imply shared information in theory -- just in practice). Specifically, one wants to compare the residuals with and without the cause variables. Consider the variables to be column vectors, so that $\mathcal{X}=[X_{t-1}^{(1)},X_{t-2}^{(1)},\ldots,X_{t-m}^{(1)},X_{t-1}^{(2)},X_{t-2}^{(2)},\ldots,X_{t-m}^{(2)},\ldots,X_{t-m}^{(k)}]^T$ is also a column vector, and $\mathcal{Y}=[Y_{t-1},Y_{t-2},\ldots,Y_{t-m}]^T$ is a column vector. ($m$ is called the order or the time lag. There are methods to optimally choose $m$, but I think people just guess the best $m$ or base it on other constraints.) Then the regression equations of interest are \begin{align*} Y_t=A\cdot\mathcal{Y}+\epsilon_t \\ Y_t=A'\cdot[\mathcal{Y},\mathcal{X}]^T+\epsilon'_t. \end{align*} To determine if the $X_{t-i}^{(j)}$ contained info about $Y_t$, one would do an F-test on the variances of $\epsilon_t$ and $\epsilon'_t$. 
To ensure that the information is not accounted for by any other source, one would gather up every other variable that can be accounted for, say $Z_t^{(1)},\ldots,Z_t^{(p)}$, define $\mathcal{Z}=[Z_{t-1}^{(1)},Z_{t-2}^{(1)},\ldots,Z_{t-m}^{(p)}]^T$, and do the regression \begin{align*} Y_t=A\cdot[\mathcal{Y},\mathcal{Z}]^T+\epsilon_t \\ Y_t=A'\cdot[\mathcal{Y},\mathcal{X},\mathcal{Z}]^T+\epsilon'_t. \end{align*} and do the same F-test on the residuals. This is just a rough sketch and I believe that many authors have improved upon this idea.
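A minimal numpy sketch of the procedure for one candidate cause $x$ and no extra conditioning variables $Z$ (the function names and the simulated data are illustrative, not from any package):

```python
import numpy as np

def lagmat(x, m):
    """Stack columns [x_{t-1}, ..., x_{t-m}] for t = m, ..., len(x)-1."""
    return np.column_stack([x[m - j : len(x) - j] for j in range(1, m + 1)])

def granger_f(y, x, m):
    """F statistic for H0: lags of x add nothing to the regression of y on its own lags."""
    Y = y[m:]
    R = np.column_stack([np.ones(len(Y)), lagmat(y, m)])   # restricted design
    U = np.column_stack([R, lagmat(x, m)])                 # unrestricted design
    rss_r = np.sum((Y - R @ np.linalg.lstsq(R, Y, rcond=None)[0]) ** 2)
    rss_u = np.sum((Y - U @ np.linalg.lstsq(U, Y, rcond=None)[0]) ** 2)
    df1, df2 = m, len(Y) - U.shape[1]
    return ((rss_r - rss_u) / df1) / (rss_u / df2)
```

The statistic is compared against the $F(m,\,df_2)$ distribution; a large value in the $x \rightarrow y$ direction combined with a small value in the $y \rightarrow x$ direction is the Granger-causal pattern described above.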
29,167
How does one verify causation?
You can't--at least not within statistics. Maxim: you can never know for certain that the effect of one variable is caused by another. The reason: you can never know if there's not another variable that you are not aware of and the data you've collected can't possibly tell you. The fact of life is that data collection isn't always sufficient when data is static and the phenomenon is dynamic--like human behavior. There the collection of data itself can skew results, just like how in particle physics the fact of observation itself can't be removed from the equation.
29,168
Neural Nets, Lasso regularization
You could take a look at sparse autoencoders, which sometimes put an L1 penalty on the neural activations, which from an optimization point of view is similar to Lasso (L1 penalty on weights). Here is a Theano implementation. An alternative is given from the UFLDL tutorial: This objective function presents one last problem - the L1 norm is not differentiable at 0, and hence poses a problem for gradient-based methods. While the problem can be solved using other non-gradient descent-based methods, we will "smooth out" the L1 norm using an approximation which will allow us to use gradient descent. To "smooth out" the L1 norm, we use $\sqrt{x^2 + \epsilon}$ in place of $\left| x \right|$, where $\epsilon$ is a "smoothing parameter" which can also be interpreted as a sort of "sparsity parameter" (to see this, observe that when $\epsilon$ is large compared to $x$, the $x^2 + \epsilon$ is dominated by $\epsilon$, and taking the square root yields approximately $\sqrt{\epsilon}$). So you could follow their approach using the smooth approximation, but you can also go for the exact gradient, which is discontinuous at 0, though sometimes that may not be a problem. For example, the popular ReLU neuron also has a gradient that is discontinuous at 0, but that is not a problem for most applications. Also, you can look at Extreme Learning Machines (ELM), which are MLPs that only learn the weights of the final layer and use random hidden-layer weights. This seems odd, but it can achieve reasonable results very quickly. Since the optimization problem to train an ELM is only linear regression, you could use any Lasso tool for it.
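To make the quoted smoothing trick concrete, here is a minimal numpy sketch (the function names are my own, not from any library). It shows that $\sqrt{x^2+\epsilon}$ tracks $|x|$ while its gradient, unlike $\mathrm{sign}(x)$, is defined at 0:

```python
import numpy as np

def smooth_l1(w, eps=1e-4):
    """Differentiable surrogate for sum(|w|): sum(sqrt(w^2 + eps))."""
    return np.sum(np.sqrt(w ** 2 + eps))

def smooth_l1_grad(w, eps=1e-4):
    """Gradient w / sqrt(w^2 + eps); well-defined at w = 0, unlike sign(w)."""
    return w / np.sqrt(w ** 2 + eps)

w = np.array([-2.0, 0.0, 3.0])
print(smooth_l1(w))        # close to the true L1 norm, 5.0
print(smooth_l1_grad(w))   # approximately sign(w), but exactly 0 at w = 0
```

A gradient-descent trainer can then add `lam * smooth_l1_grad(w)` to the loss gradient instead of the non-differentiable subgradient of $|w|$.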
29,169
Best method to create growth charts
There is a large literature on growth curves. In my mind there are three "top" approaches. In all three, time is modeled as a restricted cubic spline with a sufficient number of knots (e.g., 6). This is a parametric smoother with excellent performance and easy interpretation.
1. Classical growth curve models (generalized least squares) for longitudinal data with a sensible correlation pattern such as continuous-time AR1. If you can show that residuals are Gaussian you can get MLEs of the quantiles using the estimated means and the common standard deviation.
2. Quantile regression. This is not efficient for non-large $n$. Even though precision is not optimal, the method makes minimal assumptions (because estimates for one quantile are not connected to estimates of a different quantile) and is unbiased.
3. Ordinal regression. This treats continuous $Y$ as ordinal in order to be robust, using semi-parametric models such as the proportional odds model. From ordinal models you can estimate the mean and any quantiles, the latter only if $Y$ is continuous.
29,170
Best method to create growth charts
Gaussian process regression. Start with the squared exponential kernel and try and tune the parameters by eye. Later, if you want to do things properly, experiment with different kernels and use the marginal likelihood to optimize the parameters. If you want more detail than the tutorial linked above provides, this book is great.
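A minimal scikit-learn sketch of this (toy data; the kernel composition and `normalize_y` are illustrative assumptions, not a recommendation). The hyperparameters are tuned by maximizing the marginal likelihood inside `fit()`, as suggested above:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

X = np.sort(rng.uniform(0, 10, size=40))[:, None]
y = np.sin(X).ravel() + 0.1 * rng.normal(size=40)

# Squared-exponential (RBF) kernel plus a noise term.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

x_grid = np.linspace(0, 10, 100)[:, None]
mean, std = gp.predict(x_grid, return_std=True)
# Growth-chart-style bands can be drawn as mean +/- z * std.
```

The posterior standard deviation gives the uncertainty bands for free, which is convenient for chart-style plots.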
29,171
Fast density estimation
In the univariate case, one quick approximation: You could take a moderate number of bins (in the univariate case, say something on the order of a thousand, though it depends on your bandwidth - you need your bandwidth to cover lots of bins) and discretize the points to the bin-centers; you just scale each kernel-contribution by the respective bin-count. (This kind of approach is really not suitable in high dimensions) Another approach is to only evaluate the kernel at a limited number of positions and use some form of smooth interpolation between them. You might try log-spline density estimation I suppose, but it may not be any faster. For multivariate density estimation, you might look into the Fast Gauss Transform, see for example, here.
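A rough numpy sketch of the binning idea for the univariate Gaussian-kernel case (function name and bin count are arbitrary): the kernel is evaluated once per bin center rather than once per data point, with each contribution scaled by the bin count:

```python
import numpy as np

def binned_kde(data, grid, bandwidth, n_bins=1000):
    """Approximate Gaussian KDE: discretize points to bin centers, then
    evaluate one kernel per bin, scaled by that bin's count."""
    counts, edges = np.histogram(data, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # (grid, bins) matrix of kernel values: n_bins kernels instead of n points
    z = (grid[:, None] - centers[None, :]) / bandwidth
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (k @ counts) / (len(data) * bandwidth)

rng = np.random.default_rng(0)
data = rng.normal(size=100_000)
grid = np.linspace(-4, 4, 201)
approx = binned_kde(data, grid, bandwidth=0.2)
```

With 1000 bins and a bandwidth of 0.2 the bin width is tiny relative to the bandwidth, so the approximation error is negligible while the cost drops from $O(n)$ kernels per grid point to $O(\text{bins})$.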
29,172
Fast density estimation
OP notes that the sample moments can be calculated fast enough for his needs, and suggests: Estimate the moments of the distribution, and find the pdf based on these moments alone. This can be done with Pearson fitting, which just requires the first 4 moments. But it does assume that your data is unimodal and, to be useful and robust, that the kurtosis etc. is not too large. See, for instance, chapter 5 of our book, Rose/Smith (2002, free download): http://www.mathstatica.com/book/bookcontents.html The 'input' is the first 4 moments: the pdf is then derived from those moments, where the functional forms are already worked out symbolically, so the resulting pdf is calculated effectively instantaneously. I think the question would be better defined if the OP specified: How well does a Gaussian fit work? What does the kernel density estimate look like? Why not include a plot? Does the distribution change shape? If so, please provide some examples.
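For illustration, here is how the four 'inputs' to a Pearson-system fit can be computed with numpy (this sketch only prepares the moments; the pdf lookup from those moments is what the book covers, and the function name is my own):

```python
import numpy as np

def first_four_moments(x):
    """Mean, variance, skewness, and Pearson kurtosis -- the inputs a
    Pearson-system fit needs (the pdf family is chosen from these alone)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    d = x - mu
    m2, m3, m4 = (d ** 2).mean(), (d ** 3).mean(), (d ** 4).mean()
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2   # Pearson kurtosis; equals 3 for a normal
    return mu, m2, skew, kurt

rng = np.random.default_rng(0)
sample = rng.normal(size=200_000)
mu, var, skew, kurt = first_four_moments(sample)
print(mu, var, skew, kurt)  # near 0, 1, 0, 3 for a standard normal sample
```

These four numbers are cheap to compute even on millions of observations, which is the point of the moment-based route.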
29,173
Fast density estimation
Is sub-sampling not an option here? If you've already started to consider using moments and parametric forms, then you probably don't need to look at all million(s) observations. For relatively simple parametric distributions (e.g. Gaussian), hundreds of observations would likely suffice. The full answer will largely depend on the downstream use, too. Will you be seeking to subsequently sample new values from this unknown distribution? If so, the ecdf method in R mentioned above will work just fine, even from a down-sampled subset of your original data.
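A toy sketch of the subsample-then-ECDF idea in Python (numpy standing in for R's ecdf; the sizes are arbitrary). Drawing new values from the estimated distribution then amounts to resampling the subset:

```python
import numpy as np

rng = np.random.default_rng(0)
big = rng.lognormal(size=1_000_000)            # stand-in for the full data

sub = rng.choice(big, size=500, replace=False)  # down-sampled subset

# ECDF of the subset; evaluating it is just a sorted search.
xs = np.sort(sub)
def ecdf(t):
    return np.searchsorted(xs, t, side="right") / len(xs)

# "New" draws from the estimated distribution = resampling the subset.
new_draws = rng.choice(sub, size=10)
print(ecdf(np.median(sub)), new_draws[:3])
```

Everything downstream (ECDF evaluation, resampling) now costs $O(\log 500)$ or less per call, regardless of how large the original data was.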
29,174
Propensity Score Matching with time-varying treatment
Maybe the following paper is relevant for your case: Lu B. Propensity Score Matching with Time-Dependent Covariates. Biometrics 2005; 61, 721–728. In the situation considered in the paper, subjects may start treatment at any point during an observation period. An individual who becomes exposed at time $t$ is matched to several controls selected from the corresponding risk set, i.e. from all subjects who are still at risk of becoming exposed at time $t$. Matching is with respect to a time-dependent propensity score, defined as the hazard of becoming exposed at time $t$ computed from a Cox proportional hazards model: $$h(t)=h_0(t)\exp(\beta'x(t))$$ where $x(t)$ is a vector of potentially time-varying predictors of treatment status. In each risk set, matching is actually performed on the linear predictor scale according to the metric $$d(x_i(t),x_j(t))=\left(\hat\beta'x_i(t)-\hat\beta'x_j(t)\right)^2.$$
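A small numpy sketch of the matching step, assuming $\hat\beta$ has already been obtained from a fitted Cox model (all numbers here are made up): within the risk set at time $t$, the control nearest on the linear-predictor scale is chosen.

```python
import numpy as np

beta_hat = np.array([0.4, -1.2])          # hypothetical Cox coefficients

x_exposed = np.array([1.3, 0.7])          # covariates x_i(t) of the new case
risk_set = np.array([[1.0, 0.5],          # covariates x_j(t) of subjects
                     [2.0, 0.9],          # still at risk at time t
                     [1.4, 0.6]])

lp_case = beta_hat @ x_exposed            # linear predictor of the case
lp_controls = risk_set @ beta_hat         # linear predictors of the controls
d = (lp_case - lp_controls) ** 2          # the matching metric from the paper
best_control = int(np.argmin(d))          # -> 1 (nearest on the LP scale)
```

For 1:k matching one would take the k smallest distances instead of the single argmin, and repeat this within each risk set.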
29,175
Propensity Score Matching with time-varying treatment
Stata 13 has a multivalued treatment effects estimator. It might be possible to reframe your problem as a multivalued-treatment one, where treatment is indexed by time (treated in year 1, treated in year 2, ..., treated in year 12) rather than binary. The outcome will be measured in the year after treatment.
29,176
Testing for overdispersion in logistic regression
The approach described requires unnecessary computations. The test statistic is just sum(residuals(model_binom, type = "deviance")^2) This is the residual deviance; the analogous sum of squared Pearson residuals (type = "pearson") gives the Pearson $\chi^2$ lack-of-fit statistic, and both have approximate chi-squared distributions. Overdispersion as such doesn't apply to Bernoulli data. A large value of the statistic could indicate missing covariates, polynomial terms, or interaction terms, or that the data should be grouped. A p-value of 0.79 indicates the test failed to find any problems.
29,177
Testing for overdispersion in logistic regression
As @oleh says, the chi2 test is basically a general GOF, which will be triggered by overdispersion, but could be triggered also by other problems. You can test specifically for overdispersion in binomial GLMs with the DHARMa R package (disclaimer: I'm the developer), which compares the dispersion in the data with the dispersion of simulated data from the fitted model. model_binom <- glm(Species=="versicolor" ~ Sepal.Width, family=binomial(), data=iris) library(DHARMa) testDispersion(model_binom) However, note that the raw 0/1 response in a logistic regression cannot have overdispersion, so this test (as any other dispersion test) will never be positive. Overdispersion tests on a 0/1 response only make sense if you group the residuals. See the comments specific to binomial responses in the DHARMa vignette here.
29,178
Is Gaussian process regression a Bayesian method?
On your first question: GPs are Bayesian because they involve constructing a prior distribution (here over functions directly rather than over parameters) and updating this distribution by conditioning on the data. The Gaussian part just makes the resulting posterior friendlier to work with than it might be otherwise. On your second question: you might ask how your last equation is realised by the 'even simpler approach' described in section 4.2. Things are indeed being integrated out there.
29,179
Is Gaussian process regression a Bayesian method?
It seems the question is not totally settled. I was also frustrated about it until I came across the following: "Posterior probability is just the conditional probability that is output by Bayes' theorem. There is nothing special about it; it does not differ in any way from any other conditional probability, it just has its own name." The original answer is here
29,180
What is the intuitive sense behind the purpose and mechanics of Sufficient Statistics?
No. What they say is that if $X_1^\prime,\dots,X_n^\prime$ is another random sample from the same population as the original data $X_1,\dots,X_n$, it contains an equal amount of probabilistic information about $\theta$. Therefore, we can "recover the data" if we retain $T$ and discard $X_1,\dots,X_n$. That's why $T$ is "sufficient". Data reduction: if $T$ is sufficient, the "extra information" carried by $X$ is worthless as far as $\theta$ is concerned. It is then only natural to consider inference procedures which do not use this extra irrelevant information. This leads to the Sufficiency Principle: any inference procedure should depend on the data only through sufficient statistics. See here for more detail on principles involved in data reduction.
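A tiny numerical illustration (Bernoulli model, values made up): two different samples with the same $T=\sum_i x_i$ produce identical likelihood functions, so nothing about $\theta$ is lost by keeping only $T$:

```python
import numpy as np
from math import prod

def bernoulli_likelihood(theta, x):
    """Likelihood of an i.i.d. Bernoulli(theta) sample x of 0s and 1s."""
    return prod(theta if xi else 1 - theta for xi in x)

# Two different samples with the same sufficient statistic T = sum(x) = 3:
a = [1, 1, 0, 0, 1]
b = [0, 1, 1, 1, 0]

thetas = np.linspace(0.01, 0.99, 50)
la = [bernoulli_likelihood(t, a) for t in thetas]
lb = [bernoulli_likelihood(t, b) for t in thetas]
# The two likelihood curves coincide: the data enter only through T.
```

Both equal $\theta^3(1-\theta)^2$, so any likelihood-based inference gives identical answers for the two samples.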
29,181
Two-sample comparison of proportions, sample size estimation: R vs Stata
The difference is due to the fact that Stata's sampsi command (deprecated as of Stata 13 and replaced by power) uses the continuity correction by default, whereas R's power.prop.test() does not (for details on the formula used by Stata, see [PSS] power twoproportions). This can be changed with the nocontinuity option, e.g., sampsi 0.70 0.85, power(0.90) alpha(0.05) nocontinuity which yields a sample size of 161 per group. Use of the continuity correction yields a more conservative test (i.e., larger sample size), and obviously matters less as the sample size increases. Frank Harrell, in the documentation for bpower (part of his Hmisc package), points out that the formula without the continuity correction is pretty accurate, thereby providing some justification for forgoing the correction.
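The uncorrected formula is easy to reproduce directly. This Python sketch (stdlib only; the function name is my own) implements the standard two-sample normal-approximation sample-size formula without the continuity correction and recovers the 161-per-group figure:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Sample size per group, two-sided two-sample test of proportions,
    no continuity correction (the formula behind R's power.prop.test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return ceil((num / (p1 - p2)) ** 2)

print(n_per_group(0.70, 0.85))  # -> 161, matching the nocontinuity result
```

Adding the continuity correction inflates this, which is why the default Stata answer is larger.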
29,182
classification threshold in RandomForest-sklearn
You could indeed wrap your random forest in a class whose predict method calls the predict_proba method of the internal random forest and outputs class 1 only if the probability is higher than a custom threshold. Alternatively you can bias the training algorithm by passing a higher sample_weight for samples from the minority class.
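A hedged sketch of the wrapper idea with scikit-learn (the class name and toy data are made up):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class ThresholdForest:
    """Wrap a RandomForestClassifier and predict class 1 only when its
    predicted probability exceeds `threshold`."""
    def __init__(self, threshold=0.5, **rf_kwargs):
        self.threshold = threshold
        self.rf = RandomForestClassifier(**rf_kwargs)

    def fit(self, X, y):
        self.rf.fit(X, y)
        return self

    def predict(self, X):
        proba = self.rf.predict_proba(X)[:, 1]   # P(class 1) per sample
        return (proba >= self.threshold).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

strict = ThresholdForest(threshold=0.9, n_estimators=50, random_state=0).fit(X, y)
lenient = ThresholdForest(threshold=0.1, n_estimators=50, random_state=0).fit(X, y)
# Raising the threshold can only reduce the number of predicted positives.
```

For the second suggestion, you would instead pass weights at fit time, e.g. `rf.fit(X, y, sample_weight=w)` with larger `w` on minority-class samples.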
29,183
Should pruning be avoided for bagging (with decision trees)?
Tal, Generally speaking, pruning will hurt performance of bagged trees. Trees are unstable classifiers, meaning that if you perturb the data a little the tree might significantly change. They are low bias but high variance models. Bagging generally works by "replicating" the model to drive the variance down (the old "increase your sample size" trick). However, if you end up averaging models that are very similar, then you don't gain much. If the trees are unpruned, they tend to be more different from one another than if they were pruned. This has the effect of "decorrelating" the trees so that you are averaging trees that are not overly similar. This is also the reason that random forests add the additional tweak of the random predictor selection. That coerces the trees into being very different. Using unpruned trees will increase the risk of overfitting, but model averaging more than offsets this (generally speaking). HTH, Max
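A rough way to see the "decorrelation" point empirically (toy data; `mean_disagreement` is an ad-hoc proxy, not a standard diagnostic): compare how often member trees of a bagged ensemble disagree when they are heavily restricted versus fully grown. Typically the fully grown trees disagree more, which is what makes averaging them worthwhile:

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = ((X[:, 0] + X[:, 1] * X[:, 2] + 0.5 * rng.normal(size=300)) > 0).astype(int)

def mean_disagreement(bag, X):
    """Average fraction of points on which two member trees disagree --
    a rough proxy for how 'decorrelated' the ensemble is."""
    preds = [est.predict(X) for est in bag.estimators_]
    pairs = combinations(range(len(preds)), 2)
    return float(np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs]))

def bag(depth):
    base = DecisionTreeClassifier(max_depth=depth, random_state=0)
    return BaggingClassifier(base, n_estimators=25, random_state=0).fit(X, y)

d_shallow = mean_disagreement(bag(1), X)    # heavily "pruned" stumps
d_deep = mean_disagreement(bag(None), X)    # fully grown trees
print(d_shallow, d_deep)
```

`max_depth` is only a crude stand-in for cost-complexity pruning, but the qualitative contrast between restricted and unrestricted trees is the same.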
29,184
Is a saturated model a special case of an overfitted model?
@Tomka's right. A saturated model fits as many parameters as possible for a given set of predictors, but whether it's over-fitted or not depends on the number of observations for each unique pattern of predictors. Suppose you have a linear model with 100 observations of $y$ on $x=0$ and 100 on $x=1$. Then the model $\operatorname{E}Y = \beta_0 +\beta_1 x$ is saturated but surely not over-fitted. But if you have one observation of $y$ for each of $x=(0,1,2,3,4)^\mathrm{T}$ the model $\operatorname{E}Y = \beta_0 +\beta_1 x +\beta_2 x^2 +\beta_3 x^3 +\beta_4 x^4$ is saturated & a perfect fit—doubtless over-fitted†. When people talk about saturated models having as many parameters as observations, as in the linked web page & CV post, they're assuming a context of one observation for each predictor pattern. (Or perhaps sometimes using 'observation' differently—are 100 individuals in a 2×2 contingency table 100 observations of individuals, or 4 observations of cell frequencies?) † Don't take "surely" & "doubtless" literally, by the way. It's possible for the first model that $\beta_1$ is so small compared to $\operatorname{Var}Y$ you'd predict better without trying to estimate it, & vice versa for the second.
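The second case above can be checked directly: with one noisy observation per $x$ value, the degree-4 polynomial reproduces the data exactly. A sketch (the noisy linear truth and the Lagrange-form interpolation are illustrative assumptions, not from the original answer):

```python
import random

def lagrange_fit(xs, ys):
    """Return the unique degree-(n-1) polynomial through the n points (Lagrange form)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

random.seed(1)
xs = [0, 1, 2, 3, 4]
ys = [x + random.gauss(0, 1) for x in xs]   # noisy linear truth, one obs per x

p = lagrange_fit(xs, ys)
residuals = [y - p(x) for x, y in zip(xs, ys)]
print(max(abs(r) for r in residuals))       # essentially 0: a perfect, over-fitted fit
```

The saturated model leaves no residual variation at all, so it has fit the noise as well as the signal, whereas the 100-observations-per-group example above would not.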
29,185
Chi-squared test with 0 expected values
You would only ignore the 0's if there is some reason (not a statistical one) to do so; but including it would only change the degrees of freedom since (0-0) is, of course, 0. However, I am not sure you want chi-square here at all. It would depend on why you expected only AA genotype. If you do want chi-square, it would be $\frac{(2-0)^2}{0} + \frac{(3-5)^2}{5} = \infty$
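The arithmetic can be made explicit. A small sketch (the convention that a $0^2/0$ cell contributes nothing mirrors the degrees-of-freedom remark above and is an assumption):

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic; a zero expected count with a nonzero
    observed count makes the statistic infinite."""
    total = 0.0
    for o, e in zip(observed, expected):
        if e == 0:
            if o == 0:
                continue          # a 0-observed/0-expected cell contributes nothing
            return float("inf")
        total += (o - e) ** 2 / e
    return total

print(chi_square([2, 3], [0, 5]))   # inf, as in the answer above
print(chi_square([0, 5], [0, 5]))   # 0.0
```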
29,186
Chi-squared test with 0 expected values
The chi-square approximation is not valid when cell counts are small. Try Fisher's exact test, using a multinomial probability distribution. Wiki: https://en.wikipedia.org/wiki/Fisher%27s_exact_test
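For a goodness-of-fit version of the exact test, the multinomial probabilities can be computed directly. A sketch (the genotype counts and hypothesised proportions are illustrative; note that any category with hypothesised probability 0 but a positive observed count gets probability 0 outright):

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """P(counts) under a Multinomial(n, probs) distribution."""
    n = sum(counts)
    coef = factorial(n)
    for c in counts:
        coef //= factorial(c)
    return coef * prod(p ** c for p, c in zip(probs, counts))

# Hypothetical (AA, non-AA) counts under "only AA expected", i.e. probs (1, 0):
print(multinomial_pmf([3, 2], [1.0, 0.0]))   # 0.0: impossible under the hypothesis
print(multinomial_pmf([5, 0], [1.0, 0.0]))   # 1.0
```

A hypothesised probability of exactly 0 is degenerate: observing even one such individual rejects the hypothesis with certainty, so no approximation is needed in that cell.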
29,187
Can I combine several ordinal questions to create an index and/or composite measure?
Yes, both your points make perfect sense, and are indeed standard practice - at least in an area called psychometrics. But I cannot agree with the title question: it is not always valid for ordinal variables. In the general case one cannot add or subtract values measured on an ordinal scale and hope that the result would be independent of the arbitrariness inherent in the notion of an ordinal variable. An ordinal variable is like an interval variable, except that we cannot say how far apart adjacent levels of the variable are. For instance, education (which in many contexts is a valid ordinal variable) can be measured in 3 levels: Primary education, Secondary education, Higher education. These 3 levels are usually mapped internally onto the numeric values "1", "2" and "3" - but this mapping is completely arbitrary. One can equally well map these levels to "1", "10", "100", or "8", "12", "17" (the last example would be a rough estimate of years of education), or employ the procedure from Wittkowski's paper. All statistical procedures that are designed for ordinal variables are invariant with respect to any strictly increasing (order-preserving) function applied to the values associated with the levels. Imagine now that we asked the subject to state the education level of their mother and father, and we want to build a parents' education index - by simply averaging the parents' education levels. The outcome now becomes highly dependent on the mapping between the education levels and the numbers that represent them internally. For the most typical mapping ("1", "2" and "3") the averaging yields the same level "2" whether one parent has Primary education and the other has Higher education, or both parents have Secondary education. This feature might be correct, or might not, depending on how well the assigned numerical values represent the actual value each education level has in our view. The typical 5-level ranking you mentioned (a.k.a. Likert scale) was specially crafted so that the semantic distance between consecutive levels is kept roughly constant. Because of this property, such variables can be treated as interval, hence we can proceed with addition (or the arithmetic mean, or other mathematical manipulations).
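The encoding-dependence of averaged ordinal scores is easy to demonstrate. A sketch using the education example above (both encodings are order-preserving, yet they rank the two hypothetical families differently):

```python
import statistics

# Two order-preserving encodings of the same ordinal education levels.
coding_a = {"primary": 1, "secondary": 2, "higher": 3}
coding_b = {"primary": 8, "secondary": 12, "higher": 17}   # rough years of schooling

family_1 = ["primary", "higher"]       # one parent at each extreme
family_2 = ["secondary", "secondary"]  # both parents in the middle

for coding in (coding_a, coding_b):
    m1 = statistics.fmean(coding[level] for level in family_1)
    m2 = statistics.fmean(coding[level] for level in family_2)
    print(m1, m2, m1 == m2)
```

Under the "1, 2, 3" coding the two families get identical index values; under the years-of-schooling coding family 1 scores higher, so the ranking produced by averaging is an artifact of the arbitrary numeric mapping.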
29,188
Can I combine several ordinal questions to create an index and/or composite measure?
I assume here that your study's requirement is something along the lines of: Given the ordinal responses to n questions for each candidate (a policy research institute in your case), rank/sort order the candidates by functionally combining the n-dimensional response tuple's elements into a score/metric. Then perhaps you can look into the work of Wittkowski et al. (2004) and the references therein, on combining multiple ordinal variables for scoring. Reference: Wittkowski et al. (2004) Combining several ordinal measures in clinical studies. Statistics in medicine, 23, 1579-1592.
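A toy sketch in the spirit of the pairwise u-scores used in that line of work (this is an illustration, not Wittkowski et al.'s exact procedure; the four response tuples are made up):

```python
def u_scores(tuples):
    """Score each candidate by pairwise dominance of its ordinal response tuple:
    +1 for every other candidate it dominates (>= on every item, > on at least
    one) and -1 for every candidate that dominates it; incomparable pairs
    contribute nothing."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    scores = []
    for a in tuples:
        s = 0
        for b in tuples:
            if dominates(a, b):
                s += 1
            elif dominates(b, a):
                s -= 1
        scores.append(s)
    return scores

# Hypothetical 3-question ordinal responses for four institutes.
responses = [(3, 4, 2), (2, 2, 2), (5, 5, 4), (3, 1, 5)]
print(u_scores(responses))   # [0, -2, 2, 0]
```

The appeal of this kind of score is that it uses only the orderings within each question, so it is unaffected by any order-preserving recoding of the response levels.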
29,189
I log transformed my dependent variable, can I use GLM normal distribution with LOG link function?
Can I use GLM normal distribution with LOG link function on a DV that has already been log transformed? Yes; if the assumptions are satisfied on that scale. Is the variance homogeneity test sufficient to justify using normal distribution? Why would equality of variance imply normality? Is the residual checking procedure correct to justify choosing the link function model? You should beware of using both histograms and goodness of fit tests to check the suitability of your assumptions: 1) Beware using the histogram for assessing normality. (Also see here) In short, depending on something as simple as a small change in your choice of binwidth, or even just the location of the bin boundary, it's possible to get quite different impressions of the shape of the data: That's two histograms of the same data set. Using several different binwidths can be useful in seeing whether the impression is sensitive to that. 2) Beware using goodness of fit tests for concluding that the assumption of normality is reasonable. Formal hypothesis tests don't really answer the right question. e.g. see the links under item 2 here. About the variance, that was mentioned in some papers using similar datasets "because distributions had homogeneous variances a GLM with a Gaussian distribution was used". If this is not correct, how can I justify or decide the distribution? In normal circumstances, the question isn't 'are my errors (or conditional distributions) normal?' - they won't be, we don't even need to check. A more relevant question is 'how badly does the degree of non-normality that's present impact my inferences?' I suggest a kernel density estimate or normal QQ plot (plot of residuals vs normal scores). If the distribution looks reasonably normal, you have little to worry about.
In fact, even when it's clearly non-normal it still may not matter very much, depending on what you want to do (normal prediction intervals really will rely on normality, for example, but many other things will tend to work at large sample sizes). Funnily enough, at large samples, normality becomes generally less and less crucial (apart from PIs as mentioned above), but your ability to reject normality becomes greater and greater. Edit: the point about equality of variance is that it really can impact your inferences, even at large sample sizes. But you probably shouldn't assess that by hypothesis tests either. Getting the variance assumption wrong is an issue whatever your assumed distribution. I read that scaled deviance should be around N-p for the model for a good fit, right? When you fit a normal model it has a scale parameter, in which case your scaled deviance will be about N-p even if your distribution isn't normal. In your opinion, is the normal distribution with log link a good choice? In the continued absence of knowing what you're measuring or what you're using the inference for, I still can't judge whether to suggest another distribution for the GLM, nor how important normality might be to your inferences. However, if your other assumptions are also reasonable (linearity and equality of variance should at least be checked, and potential sources of dependence considered), then in most circumstances I'd be very comfortable doing things like using CIs and performing tests on coefficients or contrasts - there's only a very slight impression of skewness in those residuals, which, even if it's a real effect, should have no substantive impact on those kinds of inference. In short, you should be fine. (While another distribution and link function might do a little better in terms of fit, only in restricted circumstances would they be likely to also make more sense.)
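The remark about scaled deviance can be verified directly: for a Gaussian model with an estimated scale, the scaled deviance equals $N-p$ by construction, whatever the error distribution. A sketch (the simulated design and the deliberately non-normal errors are illustrative):

```python
import random

random.seed(2)
n = 50
x = [random.uniform(0, 10) for _ in range(n)]
# Deliberately non-normal (shifted exponential) errors:
y = [1.0 + 0.3 * xi + random.expovariate(1.0) - 1.0 for xi in x]

# Ordinary least squares by hand: slope and intercept.
mx, my = sum(x) / n, sum(y) / n
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
b0 = my - b1 * mx

rss = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))  # Gaussian-GLM deviance
p = 2                                # parameters: intercept and slope
phi_hat = rss / (n - p)              # estimated dispersion (scale)
print(rss / phi_hat, n - p)          # scaled deviance equals N - p by construction
```

So a scaled deviance near $N-p$ tells you nothing about whether the normal distribution was a good choice here; it is an identity once the scale is estimated from the same residuals.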
29,190
Determine the communication classes for this Markov Chain
I was not familiar with the definition of communicating classes for Markov chains, but found agreement in the definitions given on Wikipedia and on this webpage from the University of Cambridge. Assume that $\{X_n\}_{n\geq 0}$ is a time-homogeneous Markov Chain. Both sources state a set of states $C$ of a Markov Chain is a communicating class if all states in $C$ communicate. However, for two states, $i$ and $j$, to communicate, it is only necessary that there exists $n>0$ and $n^{\prime}>0$ such that $$ P(X_n=i|X_0=j)>0 $$ and $$ P(X_{n^{\prime}}=j|X_0=i)>0 $$ It is not necessary that $n=n^{\prime} = 1$ as stated by @Varunicarus. As you mentioned, this Markov chain is indeed irreducible and thus all states of the Markov chain form a single communicating class, which is actually the definition of irreducibility given in the Wikipedia entry. It is often helpful for problems with small transition matrices like this to draw a directed graph of the Markov chain and see if you can find a cycle that includes all states of the Markov Chain. If so, the chain is irreducible and all states form a single communicating class. For larger transition matrices, more theory and/or computer programming will be necessary.
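The reachability check can be automated for small chains. A sketch (a generic transitive-closure approach; the reducible example matrix is made up, not the one from the question):

```python
def communicating_classes(P):
    """Group states of a transition matrix into communicating classes:
    i and j communicate iff each is reachable from the other in some
    number of steps (a path of positive one-step probabilities)."""
    n = len(P)
    reach = [[P[i][j] > 0 or i == j for j in range(n)] for i in range(n)]
    for k in range(n):                       # Warshall's transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    classes, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        cls = [j for j in range(n) if reach[i][j] and reach[j][i]]
        seen.update(cls)
        classes.append(cls)
    return classes

# A small reducible example: state 2 can reach {0, 1} but not return.
P = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.3, 0.3, 0.4]]
print(communicating_classes(P))   # [[0, 1], [2]]
```

An irreducible chain returns a single class containing every state, matching the definition quoted above.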
29,191
Determine the communication classes for this Markov Chain
In an irreducible Markov Chain all states belong to a single communicating class. The given transition probability matrix corresponds to an irreducible Markov Chain. This can be easily observed by drawing a state transition diagram. Alternatively, by computing $P^{(4)}$, we can observe that the given TPM is regular. This concludes that the given Markov Chain is irreducible. $$ P^{(4)} = \begin{pmatrix} 0.1576938 & 0.2583928 & 0.08312500 & 0.2327933 & 0.2588625\\ 0.1655115 & 0.2854474 & 0.11632500 & 0.2161569 & 0.1923158\\ 0.1500375 & 0.1895678 & 0.09953125 & 0.2075683 & 0.3465500\\ 0.1218750 & 0.2135125 & 0.10625000 & 0.2215688 & 0.3334688\\ 0.1277500 & 0.0615000 & 0.07750000 & 0.1892750 & 0.5437250 \end{pmatrix} $$
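The same check can be scripted: raise the transition matrix to a power and test whether every entry is positive. A sketch (the question's 5-state matrix is not reproduced in this answer, so a hypothetical 3-state chain is used instead):

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of row lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_pow(A, k):
    """Naive repeated multiplication; fine for small transition matrices."""
    R = A
    for _ in range(k - 1):
        R = mat_mul(R, A)
    return R

# Hypothetical 3-state transition matrix:
P = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
P4 = mat_pow(P, 4)
print(all(p > 0 for row in P4 for p in row))   # True: the chain is regular, hence irreducible
```

A strictly positive power shows regularity, which implies irreducibility; the converse does not hold (a periodic irreducible chain never has an all-positive power), so a zero entry in one power is not by itself conclusive.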
29,192
Determine the communication classes for this Markov Chain
Communicating Classes for this matrix would be: {1}, {2}, {3}, {4,5}. States 4 and 5 communicate with each other directly, therefore they constitute the same communicating class -- they form an equivalence class. The rest of the states do not have two-way communication. For example, you may access state 1 from 4 but not state 4 from 1, therefore states 4 and 1 are not in the same communicating class. Disclaimer: So I just started learning this topic myself -- I feel your pain of having a poor lecturer -- and thus the above is a result of novice understanding.
29,193
Introductory textbook on nonparametric Bayesian models?
Regarding your comment to @jerad's solution, I believe that you don't have to get disappointed because you cannot prove formula 12. It needs some theory of Stochastic Processes. If you want to know how formula 12 is derived, check Ferguson's paper, A bayesian analysis of some nonparametric problems (The Annals of Statistics 1973, 1(2):209), who first proved the existence of the Dirichlet Process and its properties. In general, to study Bayesian Nonparametrics you need to study Probability Theory and Stochastic Processes. Two books that are commonly used in BNP are: Ghosh and Ramamoorthi, Bayesian Nonparametrics, Springer; 2003. Hjort, Holmes, Müller, and Walker, Bayesian Nonparametrics, Cambridge University Press; 2010.
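As a small taste of the constructions discussed in those references, here is a truncated draw from a Dirichlet process via Sethuraman's stick-breaking representation (the concentration parameter, the standard-normal base measure, and the truncation tolerance are all arbitrary choices for illustration):

```python
import random

def stick_breaking(alpha, base_draw, tol=1e-8):
    """Draw one (truncated) sample G ~ DP(alpha, H) by stick breaking:
    weights w_k = v_k * prod_{i<k}(1 - v_i) with v_k ~ Beta(1, alpha),
    atoms drawn i.i.d. from the base measure H."""
    weights, atoms, remaining = [], [], 1.0
    while remaining > tol:
        v = random.betavariate(1.0, alpha)
        weights.append(remaining * v)
        atoms.append(base_draw())
        remaining *= 1.0 - v
    return weights, atoms

random.seed(3)
weights, atoms = stick_breaking(alpha=2.0, base_draw=lambda: random.gauss(0, 1))
print(len(weights), sum(weights))   # weights sum to ~1 up to the truncation tolerance
```

The resulting discrete measure (atoms with these weights) is one realisation of a random probability distribution; it is the constructive counterpart of the existence result in Ferguson's paper.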
29,194
Introductory textbook on nonparametric Bayesian models?
As far as I know, no such book exists yet as the area is still quite new. The couple of Bayesian nonparametrics books I've seen are basically just a bunch of review papers from various researchers bound together. If you have a Ph.D. in math, applied or not, I'm sure you can get your head around it by reading the standard papers. Probably the gentlest yet most thorough introduction to BNP methods is this tutorial by Sam Gershman.
29,195
What is the advantage of imputation over building multiple models in regression?
I think the key here is understanding the missing data mechanism; or at least ruling some out. Building separate models is akin to treating missing and non-missing groups as random samples. If missingness on X3 is related to X1 or X2 or some other unobserved variable, then your estimates will likely be biased in each model. Why not use multiple imputation on the development data set and use the combined coefficients on a multiply imputed prediction set? Average across the predictions and you should be good.
What is the advantage of imputation over building multiple models in regression?
I assume that you are interested in obtaining unbiased estimates of the regression coefficients. The analysis of the complete cases yields unbiased estimates of your regression coefficients provided that the probability that X3 is missing does not depend on Y. This holds even if the missingness probability depends on X1 or X2, and for any type of regression analysis. Of course, the estimates may be inefficient if the proportion of complete cases is small. In that case you could use multiple imputation of X3 given X2, X1 and Y to increase precision. See White and Carlin (2010) Stat Med for details.
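To make the imputation step concrete, here is a toy stochastic-regression imputation with a single observed predictor (a simplification of imputing X3 from X1, X2 and Y; the function and variable names are mine, not from White and Carlin):

```python
import random
import statistics

def impute_once(x_obs, y_obs, x_mis, rng):
    """One stochastic-regression draw of missing y values given observed x.

    Fit y ~ x on complete cases, then predict each missing y from the fitted
    line plus a resampled residual, so imputed values keep realistic spread.
    """
    xbar, ybar = statistics.fmean(x_obs), statistics.fmean(y_obs)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(x_obs, y_obs))
             / sum((x - xbar) ** 2 for x in x_obs))
    intercept = ybar - slope * xbar
    residuals = [y - (intercept + slope * x) for x, y in zip(x_obs, y_obs)]
    return [intercept + slope * x + rng.choice(residuals) for x in x_mis]

# Multiple imputation: repeat M times, refit the model on each completed
# data set, then combine the M coefficient estimates.
draws = [impute_once([0, 1, 2], [0.1, 2.0, 3.9], [3], random.Random(seed))
         for seed in range(5)]
```

Adding a resampled residual (rather than imputing the bare fitted value) is what distinguishes this from deterministic regression imputation, which would understate the variance of X3.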
What is the advantage of imputation over building multiple models in regression?
One study out of Harvard suggests multiple imputation with five forecasts of the missing data (reference: http://m.circoutcomes.ahajournals.org/content/3/1/98.full ). Even then, I recall comments that imputation models may still produce intervals for the model parameters that fail to include the true underlying values! With that in mind, it appears best to use five simple naive models for the missing value (assuming, in the current discussion, that the data are not missing at random) that produce a good spread of values, so that the intervals may at least contain the true parameters. My experience in sampling theory is that many resources are often spent subsampling the non-response population, which at times appears to be very different from the response population. As such, I would recommend a similar exercise in missing-value regression at least once in the particular area of application. The relationships uncovered in such an exploration of the missing data can be of historical value in constructing better missing-data forecast models for the future.
How to calculate mutual information?
How about forming a joint probability table holding the normalized co-occurrences in documents? Then you can obtain the joint entropy and the marginal entropies from the table. Finally, $$I(X,Y) = H(X)+H(Y)-H(X,Y). $$
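For example, with the joint table stored as a dictionary of probabilities, the marginal and joint entropies, and hence $I(X,Y)$, follow directly (a sketch; it assumes the table is already normalized to sum to 1):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a normalized joint table.

    joint: dict mapping (x, y) -> P(x, y).
    """
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# Two binary words that always co-occur share exactly 1 bit of information
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # → 1.0
```

If the table comes from raw co-occurrence counts, divide each count by the grand total first.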
Is this a correct way to continually update a probability using Bayes Theorem?
This is not correct. Sequential updating of this type only works when the information you are receiving sequentially is independent (e.g. iid observations of a random variable). If each observation is not independent, as in this case, you need to consider the joint probability distribution. The correct way to update would be to go back to the prior, find the joint probability that someone loves horror movies, has seen a horror movie in the last 30 days, and owns a cat given that they do or do not choose vanilla as their favorite ice cream flavor, and then update in a single step. Updating sequentially like this when your data are not independent will rapidly drive your posterior probability much higher or lower than it ought to be.
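A small numerical illustration of the danger (all probabilities here are made up): with two perfectly correlated features, naive sequential updating multiplies the same evidence in twice and overshoots the correct single-step joint update:

```python
# Hypothetical prior and joint likelihoods for two *perfectly correlated*
# binary features (loves horror movies, watched a horror movie recently)
prior = 0.3                                # P(vanilla)
like_v     = {(1, 1): 0.8, (0, 0): 0.2}    # P(a, b | vanilla)
like_not_v = {(1, 1): 0.2, (0, 0): 0.8}    # P(a, b | not vanilla)
obs = (1, 1)                               # both features observed true

def marginal(joint, idx, val):
    """Marginal probability that feature `idx` equals `val` under a joint table."""
    return sum(p for combo, p in joint.items() if combo[idx] == val)

# Correct: one update using the joint likelihood of both observations
num = prior * like_v[obs]
correct = num / (num + (1 - prior) * like_not_v[obs])

# Naive: update on each feature separately, as if they were independent
naive_num = prior * marginal(like_v, 0, 1) * marginal(like_v, 1, 1)
naive_den = naive_num + ((1 - prior) * marginal(like_not_v, 0, 1)
                         * marginal(like_not_v, 1, 1))
naive = naive_num / naive_den

print(round(correct, 3), round(naive, 3))  # → 0.632 0.873
```

The second feature adds no new information beyond the first, yet the sequential scheme counts it as fresh evidence, inflating the posterior from about 0.63 to about 0.87.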
How can I transform time series data so I can use simpler techniques for fault prediction?
You may want to look at survival analysis, with which you can estimate the survival function (the probability that the time of failure is greater than a specific time) and the hazard function (the instantaneous probability that a unit will fail, given that it has not failed so far). Most survival analysis approaches let you enter both time-invariant and time-varying predictors. There are a variety of approaches, including the semi-parametric Cox proportional hazards model (a.k.a. Cox regression) and fully parametric models. Cox regression doesn't require you to specify the underlying baseline hazard function, but you might find that you need a parametric model to properly capture the failure patterns in your data; parametric accelerated failure time models, for example, can be appropriate when the rate of failure changes over time. You might try starting with Cox regression, since it is the simplest to use, and check how well you can predict failure on a holdout test set. I suspect you will have better results with some sort of survival analysis that explicitly takes into account time and censoring (pumps that have not failed yet) than with trying to turn this into a non-time-based classification problem.
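As a concrete starting point on the survival-function side, here is a minimal Kaplan-Meier estimator that handles censored pumps (a plain-Python sketch; for real work use a dedicated package such as R's survival or Python's lifelines):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t).

    times:  observed time for each pump (failure time, or last time seen)
    events: 1 if the pump failed at that time, 0 if censored (still running)
    Returns (time, S(t)) steps at each distinct failure time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= at_t   # failed and censored pumps both leave the risk set
        i += at_t
    return curve

# Three failures plus one pump censored (still running) at t = 3
print(kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1]))
```

Note how the pump censored at t = 3 still contributes to the risk sets at t = 1 and t = 2; simply discarding unfailed pumps, as a naive classification setup would, throws that information away.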