Introduction to structural equation modeling
Jarrett Byrnes (jebyrnes here) also has his weeklong SEM intro course materials posted here: http://byrneslab.net/teaching/sem/ The course is intended for researchers applying SEMs to biological and ecological data, but it covers general introductions to SEM concepts, R code, and examples, so it is likely to be helpful to others.
What are examples of statistical experiments that allow the calculation of the golden ratio?
There's only one Mr Tripletoddletrouble. In fact, unless he has a son to pass his surname down to, he'll be the last Mr Tripletoddletrouble. Social mores of his time and place sadly disallow even such an exquisite surname to survive by passing through the female line. Mr Tripletoddletrouble has a rare and mathematicall...
What are examples of statistical experiments that allow the calculation of the golden ratio?
Because you are looking for "unexpected" solutions, permit me to offer one before explaining it. This is working R code to estimate $\varphi=(1+\sqrt{5})/2$ from iid uniform values and relatively simple (algebraic) calculations:

u <- runif(1e6)
v <- runif(length(u))
median((v/u)[u^2 + v^2 <= 1 & u <= 2*v])

1.61998 ...
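The same experiment translated into a Python sketch (numpy assumed): sample points uniformly in the unit square, keep those in the quarter disc that also satisfy $u \le 2v$, and take the median slope $v/u$.

```python
import numpy as np

# Estimate phi = (1 + sqrt(5))/2 as in the R snippet above.
rng = np.random.default_rng(1)
u = rng.uniform(size=1_000_000)
v = rng.uniform(size=1_000_000)
keep = (u**2 + v**2 <= 1) & (u <= 2 * v)  # quarter disc, above the line v = u/2
estimate = np.median(v[keep] / u[keep])
print(estimate)  # close to 1.618
```

The trick is that the polar angle of a uniform point in a disc sector is uniform, so the median slope is the tangent of the midpoint angle between $\arctan(1/2)$ and $\pi/2$, which works out to $\varphi$.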
What are examples of statistical experiments that allow the calculation of the golden ratio?
There is a recursive algorithm that succeeds (outputs heads) with probability $1/\Phi$. It takes advantage of the fact that the continued fraction representation of $\Phi$ has all ones. The algorithm follows:

Procedure OnePhi(): Returns 1 with probability $1/\Phi$.
Do the following steps repeatedly, until the algorith...
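The quoted procedure is cut off, but one concrete way to realize a heads-probability of exactly $1/\Phi$ is the "two-coin" recursion below (a sketch, not necessarily the answer's exact algorithm). It exploits the identity $1/\Phi = 1/(1 + 1/\Phi)$, i.e. the all-ones continued fraction: each round, a fair coin either accepts immediately or is vetoed by a recursive copy of the same experiment.

```python
import random

def one_phi(rng=random):
    """Return 1 with probability 1/phi = (sqrt(5)-1)/2 ~ 0.618.

    Per round: with prob 1/2 accept (the leading 1 of the continued
    fraction); otherwise consult a recursive 1/phi coin, and reject
    if it comes up 1; else repeat the round. Solving
    p = 1/2 + (1/2)(1 - p)p gives p^2 + p - 1 = 0, so p = 1/phi.
    """
    while True:
        if rng.random() < 0.5:
            return 1
        if one_phi(rng):
            return 0
        # neither coin decided this round; try again

random.seed(7)
n = 100_000
p_hat = sum(one_phi() for _ in range(n)) / n
print(p_hat)  # close to 0.618
```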
What are examples of statistical experiments that allow the calculation of the golden ratio?
Here's a quick one. It's related to the branching process from Silverfish's answer. Run a random walk, starting from height 0, say. At each step, either move up by 2 or move down by 1, with probability 1/2 each. Count the times at which the current height is below the maximum height so far. The proportion of such times...
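The answer is truncated before it states the limit; a short derivation suggests it is $1/\varphi$. The probability $d$ that the walk ever drops below its current level satisfies $d = \tfrac12 + \tfrac12 d^3$ (a down-step does it at once; after an up-step of 2 the walk must make three unit descents), whose root in $(0,1)$ is $d = (\sqrt5 - 1)/2 = 1/\varphi$, and $d$ is also the long-run fraction of times spent below the running maximum. A quick simulation sketch agrees:

```python
import numpy as np

# Random walk: step +2 or -1 with probability 1/2 each.
rng = np.random.default_rng(3)
steps = rng.choice([2, -1], size=1_000_000)
heights = np.cumsum(steps)
running_max = np.maximum.accumulate(heights)

# Fraction of times strictly below the maximum so far.
frac_below = np.mean(heights < running_max)
print(frac_below)  # close to 1/phi ~ 0.618
```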
What are examples of statistical experiments that allow the calculation of the golden ratio?
Fibonacci numbers and Markov chains: I remember a question in which the Fibonacci numbers occurred. While computing the waiting time for flipping '1-0-0', the probabilities of the state '1' and the state '1-0' are Fibonacci numbers (divided by some power of 2). We can simulate this in several ways. Exam...
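The example code is cut off; as a related sanity check (a sketch, not the answer's own code), the expected waiting time for the pattern 1-0-0 with a fair coin is exactly $2^3 = 8$ flips, since 100 has no nontrivial self-overlap:

```python
import random

def wait_for_100(rng):
    """Number of fair-coin flips until the pattern 1,0,0 first appears."""
    last3 = []
    n = 0
    while last3 != [1, 0, 0]:
        last3 = (last3 + [rng.randint(0, 1)])[-3:]  # sliding window of 3
        n += 1
    return n

rng = random.Random(11)
trials = 200_000
mean_wait = sum(wait_for_100(rng) for _ in range(trials)) / trials
print(mean_wait)  # close to 8
```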
Does correlation assume stationarity of data?
Correlation measures linear relationship. In an informal context, "relationship" means something stable. When we calculate the sample correlation for stationary variables and increase the number of available data points, this sample correlation tends to the true correlation. It can be shown that for prices, which usually ar...
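The truncated claim — that sample correlation of non-stationary series does not settle down — can be illustrated with a sketch: correlations between pairs of completely independent random walks stay widely dispersed, unlike the tight concentration near zero you would see for stationary iid pairs.

```python
import numpy as np

rng = np.random.default_rng(5)
n_steps, n_pairs = 1000, 200
corrs = []
for _ in range(n_pairs):
    # two completely independent random walks
    x = np.cumsum(rng.standard_normal(n_steps))
    y = np.cumsum(rng.standard_normal(n_steps))
    corrs.append(np.corrcoef(x, y)[0, 1])
corrs = np.array(corrs)

# For stationary iid pairs the spread would be ~ 1/sqrt(n_steps) ~ 0.03;
# for random walks the "spurious" correlations remain widely dispersed.
print(corrs.std())
```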
Does correlation assume stationarity of data?
...is the computation of correlation whose data is non-stationary even a valid statistical calculation? Let $W$ be a discrete random walk. Pick a positive number $h$. Define the processes $P$ and $V$ by $P(0) = 1$, $P(t+1) = -P(t)$ if $V(t) > h$, and otherwise $P(t+1) = P(t)$; and $V(t) = P(t)W(t)$. In other words...
How to calculate Zipf's law coefficient from a set of top frequencies?
Update: I've updated the code with a maximum likelihood estimator, per @whuber's suggestion. Minimizing the sum of squared differences between log theoretical probabilities and log frequencies, though it gives an answer, would be a statistical procedure only if it could be shown to be some kind of M-estimator. Unfortunately I c...
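A dependency-light sketch of that maximum-likelihood fit in Python (numpy assumed), using the ten top frequencies given in the question: the multinomial log-likelihood under Zipf with exponent $s$ on ranks 1..10 is maximized over a fine grid of $s$ values.

```python
import numpy as np

# Top-10 frequencies from the question.
freqs = np.array([26486, 12053, 5052, 3033, 2536, 2391, 1444, 1220, 1152, 1039])
ranks = np.arange(1, 11)

def neg_log_lik(s):
    """Multinomial negative log-likelihood under Zipf(s) on ranks 1..10."""
    p = ranks ** -s
    p = p / p.sum()
    return -np.sum(freqs * np.log(p))

# Crude but dependency-free: grid search instead of a numeric optimizer.
grid = np.linspace(0.5, 3.0, 2501)
s_hat = grid[np.argmin([neg_log_lik(s) for s in grid])]
print(s_hat)  # roughly 1.45 for these data
```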
How to calculate Zipf's law coefficient from a set of top frequencies?
There are several issues before us in any estimation problem:

1. Estimate the parameter.
2. Assess the quality of that estimate.
3. Explore the data.
4. Evaluate the fit.

For those who would use statistical methods for understanding and communication, the first should never be done without the others. For estimation it is conven...
How to calculate Zipf's law coefficient from a set of top frequencies?
The maximum likelihood estimate is only a point estimate of the parameter $s$; extra effort is needed to also find a confidence interval for the estimate. The problem is that this interval is not probabilistic: one cannot say "the parameter value s=... is with probability of 95% in the range [...]". One of the probab...
How to calculate Zipf's law coefficient from a set of top frequencies?
Here is my attempt to fit the data, evaluate and explore the results using VGAM:

require("VGAM")
freq <- dzipf(1:100, N = 100, s = 1) * 1000  # randomizing values
freq <- freq + abs(rnorm(n = 1, m = 0, sd = 100))  # adding noise
zdata <- data.frame(y = rank(-freq, ties.method = "first"), ofreq = freq)
fit = vglm(y ~ 1, zipf, zd...
How to calculate Zipf's law coefficient from a set of top frequencies?
Just for fun, this is another instance where the UWSE can provide a closed-form solution using only the topmost frequency - though at a cost of accuracy. The probability on $x = 1$ is unique across parameter values. If $\hat{w}_{x=1}$ denotes the corresponding relative frequency then, $$ \hat{s}_{UWSE} = H_{10}^{-1...
How to calculate Zipf's law coefficient from a set of top frequencies?
Here is a simple example using the text Ulysses. I'll use a simple bash script to acquire the type frequencies:

cat ulysses.txt | tr 'A-Z' 'a-z' | tr -dc 'a-z ' | tr ' ' '\n' | sort | uniq -c | sort -k1,1nr | awk '{print $1}' > ulysses_freq.txt

And then use R to fit the model using mle. The normalized frequency of the ele...
How to calculate Zipf's law coefficient from a set of top frequencies?
My solution tries to be complementary to the answers provided by mpiktas and whuber, doing an implementation in Python. Our frequencies and ranks x are:

freqs = np.asarray([26486, 12053, 5052, 3033, 2536, 2391, 1444, 1220, 1152, 1039])
x = np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

As our function is not defined in all...
How to measure/rank "variable importance" when using CART? (specifically using {rpart} from R)
Variable importance might generally be computed based on the corresponding reduction of predictive accuracy when the predictor of interest is removed (with a permutation technique, like in Random Forest) or some measure of decrease of node impurity, but see (1) for an overview of available methods. An obvious alternati...
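The permutation idea mentioned above can be sketched without any tree library (hypothetical toy data; the principle is the same as Random Forest permutation importance): fit a model, then measure how much predictive error grows when each predictor's column is shuffled.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal((n, 2))
y = 3 * X[:, 0] + 0.1 * rng.standard_normal(n)  # only column 0 matters

# Ordinary least squares as a stand-in for any fitted predictive model.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((y - X @ beta) ** 2)

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])      # break predictor j
    perm_mse = np.mean((y - Xp @ beta) ** 2)
    importance.append(perm_mse - base_mse)    # accuracy lost = importance
print(importance)  # column 0 far more important than column 1
```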
How to measure/rank "variable importance" when using CART? (specifically using {rpart} from R)
The following function (from the caret package) can be used for evaluating variable importance in rpart trees. I corrected a bug in the caret function for the case when there is only a root node in the tree.

varImp <- function(object, surrogates = FALSE, competes = TRUE, ...) {
  tmp <- rownames(object$splits)
  allVars <- colnames(attributes...
How to measure/rank "variable importance" when using CART? (specifically using {rpart} from R)
I think chl has pretty much answered the first part: "What common measures exist for ranking/measuring variable importance of participating variables in a CART model?" With respect to the second part of your question, "And how can this be computed using R (for example, when using the rpart package)?" You can find the ...
How to measure/rank "variable importance" when using CART? (specifically using {rpart} from R)
names(result) shows a variable.importance component, so result$variable.importance should help?
How to measure/rank "variable importance" when using CART? (specifically using {rpart} from R)
The caret package and the rpart package each have ways to list the variables and rank their importance, but generate different results from each other when calculating variable importance.

fit$variable.importance  ## shows different results than caret::varImp(fit)

The list of variables used is the same, but the scale i...
Why is nls() giving me "singular gradient matrix at initial parameter estimates" errors?
Automatically finding good starting values for a nonlinear model is an art. (It's relatively easy for one-off datasets when you can just plot the data and make some good guesses visually.) One approach is to linearize the model and use least squares estimates. In this case, the model has the form $$\mathbb{E}(Y) = a ...
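The model in the answer is truncated; to illustrate the linearization trick, assume (hypothetically) an exponential model $\mathbb{E}(Y) = a e^{bX}$. Taking logs turns it into a linear model whose least-squares fit supplies starting values for the nonlinear solver — a sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true = 5.0, -0.7
x = np.linspace(0, 4, 50)
# Multiplicative noise keeps y positive so logs are well defined.
y = a_true * np.exp(b_true * x) * np.exp(0.05 * rng.standard_normal(50))

# log y = log a + b * x: an ordinary linear regression.
b_start, log_a_start = np.polyfit(x, np.log(y), 1)
a_start = np.exp(log_a_start)
print(a_start, b_start)  # good enough to seed nls() / Gauss-Newton
```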
Why is nls() giving me "singular gradient matrix at initial parameter estimates" errors?
This library was able to resolve my problem with nls's singular gradient: http://www.r-bloggers.com/a-better-nls/ An example (with a placeholder formula and start values):

library(minpack.lm)
nlsLM(y ~ f(x, a, b), start = list(a = 2, b = 12))
Why is nls() giving me "singular gradient matrix at initial parameter estimates" errors?
So ... I think I mis-read this as an exponential function. All I needed was poly():

model <- lm(cost.per.car ~ poly(reductions, 3), data = q24)
new.data <- data.frame(reductions = c(91, 92, 93, 94))
predict(model, new.data)
plot(q24)
lines(q24$reductions, predict(model, list(reductions = q24$reductions)))

Or, using lattice...
Why should I be Bayesian when my dataset is large?
Being Bayesian is not only about information fed through the prior. But even then: where the prior is zero, no amount of data will ever turn that over. Having a full Bayesian posterior distribution to draw from opens a great many ways to make inference. It is easy to explain a credible interval to any audience ...
Why should I be Bayesian when my dataset is large?
I'd like to echo some of the points in the other answer with slightly different emphasis. To me the most important issue is that the Bayesian view of uncertainty/probability/randomness is the one that directly answers the questions we probably care about, whereas the Frequentist view of uncertainty directly answers oth...
Why should I be Bayesian when my dataset is large?
The other answers address what's probably your actual question. But just to add a more concrete viewpoint: if you're already a Bayesian (for small/medium datasets) and you get a large dataset, why not use the methodology you're familiar with? It will be relatively slow, but you are familiar with the steps, so you're less li...
Why should I be Bayesian when my dataset is large?
One place where Bayesian approach meets large datasets is Bayesian deep learning. When using Bayesian approach to neural networks people usually use rather simplistic priors (Gaussians, centered at zero), this is mostly for computational reasons, but also because there is not much prior knowledge (neural network parame...
Why should I be Bayesian when my dataset is large?
One place where Bayesian approach meets large datasets is Bayesian deep learning. When using Bayesian approach to neural networks people usually use rather simplistic priors (Gaussians, centered at ze
Why should I be Bayesian when my dataset is large? One place where Bayesian approach meets large datasets is Bayesian deep learning. When using Bayesian approach to neural networks people usually use rather simplistic priors (Gaussians, centered at zero), this is mostly for computational reasons, but also because there...
Why should I be Bayesian when my dataset is large? One place where Bayesian approach meets large datasets is Bayesian deep learning. When using Bayesian approach to neural networks people usually use rather simplistic priors (Gaussians, centered at ze
9,128
Simulate a uniform distribution on a disc
You want the proportion of points to be uniformly proportional to area rather than distance to the origin. Since area is proportional to the squared distance, generate uniform random areas and take their square roots; scale the results as desired. Combine that with a uniform polar angle. This is quick and simple to co...
Simulate a uniform distribution on a disc
You want the proportion of points to be uniformly proportional to area rather than distance to the origin. Since area is proportional to the squared distance, generate uniform random areas and take t
Simulate a uniform distribution on a disc You want the proportion of points to be uniformly proportional to area rather than distance to the origin. Since area is proportional to the squared distance, generate uniform random areas and take their square roots; scale the results as desired. Combine that with a uniform p...
Simulate a uniform distribution on a disc You want the proportion of points to be uniformly proportional to area rather than distance to the origin. Since area is proportional to the squared distance, generate uniform random areas and take t
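The square-root trick described above can be sketched in Python as well (the surrounding answers use R; this stdlib-only translation and its names are mine):

```python
import math
import random

def sample_disc(n, radius=1.0, seed=0):
    """Draw n points uniformly from a disc of the given radius.

    Area grows like r^2, so the square root of a uniform variate
    gives radii with density proportional to r; combined with a
    uniform polar angle this fills the disc uniformly.
    """
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        r = radius * math.sqrt(rng.random())   # uniform in area, not in radius
        theta = rng.uniform(0.0, 2.0 * math.pi)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

pts = sample_disc(100_000)
# Sanity check: the disc of half the radius should hold about 1/4 of the points.
frac = sum(x * x + y * y <= 0.25 for x, y in pts) / len(pts)
print(round(frac, 2))
```

Dropping the square root (taking the radius itself uniform) would push that fraction toward 0.5, betraying the pile-up of points near the centre.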
9,129
Simulate a uniform distribution on a disc
Rejection Sampling can be used. This means we can sample from a 2D uniform distribution and select the samples that satisfy the disc condition. Here is an example. x=runif(1e4,-1,1) y=runif(1e4,-1,1) d=data.frame(x=x,y=y) disc_sample=d[d$x^2+d$y^2<1,] plot(disc_sample)
Simulate a uniform distribution on a disc
Rejection Sampling can be used. This means we can sample from a 2D uniform distribution and select the samples that satisfy the disc condition. Here is an example. x=runif(1e4,-1,1) y=runif(1e4,-1,1) d=d
Simulate a uniform distribution on a disc Rejection Sampling can be used. This means we can sample from a 2D uniform distribution and select the samples that satisfy the disc condition. Here is an example. x=runif(1e4,-1,1) y=runif(1e4,-1,1) d=data.frame(x=x,y=y) disc_sample=d[d$x^2+d$y^2<1,] plot(disc_sample)
Simulate a uniform distribution on a disc Rejection Sampling can be used. This means we can sample from a 2D uniform distribution and select the samples that satisfy the disc condition. Here is an example. x=runif(1e4,-1,1) y=runif(1e4,-1,1) d=d

9,130
Simulate a uniform distribution on a disc
I'll give you a general n-dimensional answer that works for two-dimensional case too, of course. In three dimensions an analog of a disk is a volume of a solid ball (sphere). There are two approaches I'm going to discuss. One of them I would call "precise", and you'll get a complete solution with it in R. The second on...
Simulate a uniform distribution on a disc
I'll give you a general n-dimensional answer that works for two-dimensional case too, of course. In three dimensions an analog of a disk is a volume of a solid ball (sphere). There are two approaches
Simulate a uniform distribution on a disc I'll give you a general n-dimensional answer that works for two-dimensional case too, of course. In three dimensions an analog of a disk is a volume of a solid ball (sphere). There are two approaches I'm going to discuss. One of them I would call "precise", and you'll get a com...
Simulate a uniform distribution on a disc I'll give you a general n-dimensional answer that works for two-dimensional case too, of course. In three dimensions an analog of a disk is a volume of a solid ball (sphere). There are two approaches
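One standard n-dimensional construction in the "precise" spirit (a sketch of the general idea, not the R solution the answer itself provides) draws an isotropic direction by normalizing a Gaussian vector and a radius as $U^{1/n}$, since the volume inside radius $r$ scales as $r^n$:

```python
import math
import random

def sample_ball(n_points, dim, seed=0):
    """Uniform samples from the unit ball in `dim` dimensions."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_points):
        g = [rng.gauss(0.0, 1.0) for _ in range(dim)]   # isotropic direction
        norm = math.sqrt(sum(v * v for v in g))
        r = rng.random() ** (1.0 / dim)                 # volume ~ r^dim
        out.append([r * v / norm for v in g])
    return out

pts = sample_ball(50_000, dim=3)
# In 3D the half-radius ball holds (1/2)^3 = 1/8 of the volume.
frac = sum(sum(v * v for v in p) <= 0.25 for p in pts) / len(pts)
print(round(frac, 3))
```

With `dim=2` this reduces to the disc case, where $U^{1/2}$ is exactly the square-root trick from the other answers.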
9,131
Simulate a uniform distribution on a disc
Here is an alternative solution in R: n <- 1e4 ## r <- seq(0, 1, by=1/1000) r <- runif(n) rho <- sample(r, size=n, replace=T, prob=r) theta <- runif(n, 0, 2*pi) x <- rho * cos(theta) y <- rho * sin(theta) plot(x, y, pch=19, cex=0.6, col="#00000020")
Simulate a uniform distribution on a disc
Here is an alternative solution in R: n <- 1e4 ## r <- seq(0, 1, by=1/1000) r <- runif(n) rho <- sample(r, size=n, replace=T, prob=r) theta <- runif(n, 0, 2*pi) x <- rho * cos(theta) y <- rho * sin(th
Simulate a uniform distribution on a disc Here is an alternative solution in R: n <- 1e4 ## r <- seq(0, 1, by=1/1000) r <- runif(n) rho <- sample(r, size=n, replace=T, prob=r) theta <- runif(n, 0, 2*pi) x <- rho * cos(theta) y <- rho * sin(theta) plot(x, y, pch=19, cex=0.6, col="#00000020")
Simulate a uniform distribution on a disc Here is an alternative solution in R: n <- 1e4 ## r <- seq(0, 1, by=1/1000) r <- runif(n) rho <- sample(r, size=n, replace=T, prob=r) theta <- runif(n, 0, 2*pi) x <- rho * cos(theta) y <- rho * sin(th
9,132
An adaptation of the Kullback-Leibler distance?
You might look at Chapter 3 of Devroye, Gyorfi, and Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, 1996. See, in particular, the section on $f$-divergences. $f$-Divergences can be viewed as a generalization of Kullback--Leibler (or, alternatively, KL can be viewed as a special case of an $f$-Divergenc...
An adaptation of the Kullback-Leibler distance?
You might look at Chapter 3 of Devroye, Gyorfi, and Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, 1996. See, in particular, the section on $f$-divergences. $f$-Divergences can be vi
An adaptation of the Kullback-Leibler distance? You might look at Chapter 3 of Devroye, Gyorfi, and Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, 1996. See, in particular, the section on $f$-divergences. $f$-Divergences can be viewed as a generalization of Kullback--Leibler (or, alternatively, KL can...
An adaptation of the Kullback-Leibler distance? You might look at Chapter 3 of Devroye, Gyorfi, and Lugosi, A Probabilistic Theory of Pattern Recognition, Springer, 1996. See, in particular, the section on $f$-divergences. $f$-Divergences can be vi
9,133
An adaptation of the Kullback-Leibler distance?
The Kullback-Leibler divergence $\kappa(P|Q)$ of $P$ with respect to $Q$ is infinite when $P$ is not absolutely continuous with respect to $Q$, that is, when there exists a measurable set $A$ such that $Q(A)=0$ and $P(A)\ne0$. Furthermore the KL divergence is not symmetric, in the sense that in general $\kappa(P\mid Q)...
An adaptation of the Kullback-Leibler distance?
The Kullback-Leibler divergence $\kappa(P|Q)$ of $P$ with respect to $Q$ is infinite when $P$ is not absolutely continuous with respect to $Q$, that is, when there exists a measurable set $A$ such tha
An adaptation of the Kullback-Leibler distance? The Kullback-Leibler divergence $\kappa(P|Q)$ of $P$ with respect to $Q$ is infinite when $P$ is not absolutely continuous with respect to $Q$, that is, when there exists a measurable set $A$ such that $Q(A)=0$ and $P(A)\ne0$. Furthermore the KL divergence is not symmetri...
An adaptation of the Kullback-Leibler distance? The Kullback-Leibler divergence $\kappa(P|Q)$ of $P$ with respect to $Q$ is infinite when $P$ is not absolutely continuous with respect to $Q$, that is, when there exists a measurable set $A$ such tha
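Both points are easy to see numerically for discrete distributions; a small Python sketch (the helper below is my own, not from the answer):

```python
import math

def kl(p, q):
    """Discrete KL divergence KL(p || q); infinite when p puts mass
    on an outcome to which q assigns zero probability."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue            # 0 * log(0/q) contributes nothing
        if qi == 0.0:
            return math.inf     # p not absolutely continuous w.r.t. q
        total += pi * math.log(pi / qi)
    return total

print(kl([0.5, 0.5, 0.0], [0.9, 0.0, 0.1]))   # inf: support mismatch
print(kl([0.7, 0.3], [0.5, 0.5]))             # ~0.0823
print(kl([0.5, 0.5], [0.7, 0.3]))             # ~0.0872: not symmetric
```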
9,134
An adaptation of the Kullback-Leibler distance?
The Kolmogorov distance between two distributions $P$ and $Q$ is the sup norm of their CDFs. (This is the largest vertical discrepancy between the two graphs of the CDFs.) It is used in distributional testing where $P$ is an hypothesized distribution and $Q$ is the empirical distribution function of a dataset. It is ...
An adaptation of the Kullback-Leibler distance?
The Kolmogorov distance between two distributions $P$ and $Q$ is the sup norm of their CDFs. (This is the largest vertical discrepancy between the two graphs of the CDFs.) It is used in distribution
An adaptation of the Kullback-Leibler distance? The Kolmogorov distance between two distributions $P$ and $Q$ is the sup norm of their CDFs. (This is the largest vertical discrepancy between the two graphs of the CDFs.) It is used in distributional testing where $P$ is an hypothesized distribution and $Q$ is the empi...
An adaptation of the Kullback-Leibler distance? The Kolmogorov distance between two distributions $P$ and $Q$ is the sup norm of their CDFs. (This is the largest vertical discrepancy between the two graphs of the CDFs.) It is used in distribution
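The sup-norm gap is attained at a jump of the empirical CDF, so it suffices to check just before and after each data point; an illustrative Python sketch (names are mine):

```python
def kolmogorov_distance(sample, cdf):
    """Largest vertical gap between the empirical CDF of `sample`
    and a hypothesized continuous CDF, evaluated at the jump points."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # ECDF jumps from i/n to (i+1)/n at x; check both sides.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# Compare a small sample with the Uniform(0,1) CDF, F(x) = x.
print(kolmogorov_distance([0.1, 0.4, 0.5, 0.9], lambda x: x))  # 0.25
```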
9,135
An adaptation of the Kullback-Leibler distance?
Yes there does, Bernardo and Rueda defined something called the "intrinsic discrepancy", which for all practical purposes is a "symmetrised" version of the KL-divergence. Taking the KL divergence from $P$ to $Q$ to be $\kappa(P \mid Q)$, the intrinsic discrepancy is given by: $$\delta(P,Q)\equiv \min \big[\kappa(P \mid Q),\kappa...
An adaptation of the Kullback-Leibler distance?
Yes there does, Bernardo and Rueda defined something called the "intrinsic discrepancy", which for all practical purposes is a "symmetrised" version of the KL-divergence. Taking the KL divergence from $P$ to $Q
An adaptation of the Kullback-Leibler distance? Yes there does, Bernardo and Rueda defined something called the "intrinsic discrepancy", which for all practical purposes is a "symmetrised" version of the KL-divergence. Taking the KL divergence from $P$ to $Q$ to be $\kappa(P \mid Q)$, the intrinsic discrepancy is given by: $$\de...
An adaptation of the Kullback-Leibler distance? Yes there does, Bernardo and Rueda defined something called the "intrinsic discrepancy", which for all practical purposes is a "symmetrised" version of the KL-divergence. Taking the KL divergence from $P$ to $Q
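The definition is easy to exercise on discrete distributions; in this Python sketch (my own helper functions) the minimum stays finite whenever either KL direction does:

```python
import math

def kl(p, q):
    """Discrete KL divergence; inf where p has mass but q has none."""
    out = 0.0
    for pi, qi in zip(p, q):
        if pi > 0.0:
            if qi == 0.0:
                return math.inf
            out += pi * math.log(pi / qi)
    return out

def intrinsic_discrepancy(p, q):
    """Symmetrised divergence: the smaller of the two KL directions."""
    return min(kl(p, q), kl(q, p))

p = [0.5, 0.5, 0.0]
q = [0.4, 0.3, 0.3]
print(kl(q, p))                     # inf: p gives no mass to the third outcome
print(intrinsic_discrepancy(p, q))  # finite, and symmetric by construction
```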
9,136
Intuitive explanation for periodicity in Markov chains
First of all, your definition is not entirely correct. Here is the correct definition from wikipedia, as suggested by Cyan. Periodicity (source: wikipedia) A state i has period k if any return to state i must occur in multiples of k time steps. Formally, the period of a state is defined as k = $gcd\{ n: \Pr(X_n = i | ...
Intuitive explanation for periodicity in Markov chains
First of all, your definition is not entirely correct. Here is the correct definition from wikipedia, as suggested by Cyan. Periodicity (source: wikipedia) A state i has period k if any return to sta
Intuitive explanation for periodicity in Markov chains First of all, your definition is not entirely correct. Here is the correct definition from wikipedia, as suggested by Cyan. Periodicity (source: wikipedia) A state i has period k if any return to state i must occur in multiples of k time steps. Formally, the perio...
Intuitive explanation for periodicity in Markov chains First of all, your definition is not entirely correct. Here is the correct definition from wikipedia, as suggested by Cyan. Periodicity (source: wikipedia) A state i has period k if any return to sta
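The gcd definition can be checked mechanically for a small chain by looking at which powers of the transition matrix permit a return to the state; a Python sketch (helper names and the `max_steps` cutoff are my own):

```python
from math import gcd

def period(P, state, max_steps=50):
    """Period of `state`: gcd of all n <= max_steps with
    (P^n)[state][state] > 0, i.e. of the possible return times."""
    size = len(P)
    power = [row[:] for row in P]           # P^1
    g = 0
    for n in range(1, max_steps + 1):
        if n > 1:                           # advance to P^n
            power = [[sum(power[i][k] * P[k][j] for k in range(size))
                      for j in range(size)] for i in range(size)]
        if power[state][state] > 0:
            g = gcd(g, n)
    return g

# A chain that always swaps its two states: returns only at even times.
flip = [[0.0, 1.0], [1.0, 0.0]]
print(period(flip, 0))  # 2

# A self-loop makes a length-1 return possible, so the gcd drops to 1.
lazy = [[0.5, 0.5], [1.0, 0.0]]
print(period(lazy, 0))  # 1
```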
9,137
Intuitive explanation for periodicity in Markov chains
Let $x$ be a state in some Markov chain. Consider the set $T(x)$ of all possible return times, that is, numbers $t$ such that there is a non-zero probability of returning to $x$ in exactly $t$ steps, starting from $x$. Notice that this is a purely graph-theoretic, not probabilistic notion, in the sense that, if you dra...
Intuitive explanation for periodicity in Markov chains
Let $x$ be a state in some Markov chain. Consider the set $T(x)$ of all possible return times, that is, numbers $t$ such that there is a non-zero probability of returning to $x$ in exactly $t$ steps,
Intuitive explanation for periodicity in Markov chains Let $x$ be a state in some Markov chain. Consider the set $T(x)$ of all possible return times, that is, numbers $t$ such that there is a non-zero probability of returning to $x$ in exactly $t$ steps, starting from $x$. Notice that this is a purely graph-theoretic, ...
Intuitive explanation for periodicity in Markov chains Let $x$ be a state in some Markov chain. Consider the set $T(x)$ of all possible return times, that is, numbers $t$ such that there is a non-zero probability of returning to $x$ in exactly $t$ steps,
9,138
Intuitive explanation for periodicity in Markov chains
To be short, periodic is when you visit each state at uniform rate, aperiodic is when you visit each state at random rate.
Intuitive explanation for periodicity in Markov chains
To be short, periodic is when you visit each state at uniform rate, aperiodic is when you visit each state at random rate.
Intuitive explanation for periodicity in Markov chains To be short, periodic is when you visit each state at uniform rate, aperiodic is when you visit each state at random rate.
Intuitive explanation for periodicity in Markov chains To be short, periodic is when you visit each state at uniform rate, aperiodic is when you visit each state at random rate.
9,139
What is "feature space"?
Feature Space Feature space refers to the $n$-dimensions where your variables live (not including a target variable, if it is present). The term is used often in ML literature because a task in ML is feature extraction, hence we view all variables as features. For example, consider the data set with: Target $Y \equiv$...
What is "feature space"?
Feature Space Feature space refers to the $n$-dimensions where your variables live (not including a target variable, if it is present). The term is used often in ML literature because a task in ML is
What is "feature space"? Feature Space Feature space refers to the $n$-dimensions where your variables live (not including a target variable, if it is present). The term is used often in ML literature because a task in ML is feature extraction, hence we view all variables as features. For example, consider the data set...
What is "feature space"? Feature Space Feature space refers to the $n$-dimensions where your variables live (not including a target variable, if it is present). The term is used often in ML literature because a task in ML is
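As a toy illustration of the dimensions involved (the numbers below are made up): with one target kept aside and two features, each observation lives in a 2-dimensional feature space.

```python
# Each row is one observation; columns are the features.
features = [
    [1.2, 0.4],
    [0.7, 1.1],
    [1.9, 0.3],
]
target = [0, 1, 0]          # kept separate from the feature space

n_obs = len(features)
n_dims = len(features[0])   # dimensionality of the feature space
print(n_obs, n_dims)        # 3 2
```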
9,140
Minimum sample size for PCA or FA when the main goal is to estimate only few components?
You can actually measure whether your sample size is "large enough". One symptom of small sample size being too small is instability. Bootstrap or cross validate your PCA: these techniques disturb your data set by deleting/exchanging a small fraction of your sample and then build "surrogate models" for each of the dist...
Minimum sample size for PCA or FA when the main goal is to estimate only few components?
You can actually measure whether your sample size is "large enough". One symptom of small sample size being too small is instability. Bootstrap or cross validate your PCA: these techniques disturb you
Minimum sample size for PCA or FA when the main goal is to estimate only few components? You can actually measure whether your sample size is "large enough". One symptom of small sample size being too small is instability. Bootstrap or cross validate your PCA: these techniques disturb your data set by deleting/exchangi...
Minimum sample size for PCA or FA when the main goal is to estimate only few components? You can actually measure whether your sample size is "large enough". One symptom of small sample size being too small is instability. Bootstrap or cross validate your PCA: these techniques disturb you
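A minimal illustration of the bootstrap-stability idea, in Python with 2-D data so the leading component has a closed form (the data setup, names, and what counts as "stable" are my own choices, not the answer's):

```python
import math
import random

def first_pc_angle(xs, ys):
    """Angle of the leading principal component of 2-D data, via the
    closed-form eigenvector of the 2x2 covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

rng = random.Random(1)
xs = [rng.gauss(0, 1) for _ in range(200)]
ys = [x + rng.gauss(0, 0.3) for x in xs]      # strongly correlated toy data

# Bootstrap: resample rows, refit, and watch how much the direction moves.
angles = []
for _ in range(200):
    idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
    angles.append(first_pc_angle([xs[i] for i in idx], [ys[i] for i in idx]))
spread = max(angles) - min(angles)
print(round(math.degrees(spread), 1))  # only a few degrees: stable component
```

With a smaller sample or weaker correlation the spread of angles grows, which is exactly the instability symptom described above.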
9,141
Minimum sample size for PCA or FA when the main goal is to estimate only few components?
For factor analysis (not principal component analysis), there is quite a literature calling into question some of the old rules of thumb on the number of observations. Traditional recommendations – at least within psychometrics – would be to have at least $x$ observations per variable (with $x$ typically anywhere from ...
Minimum sample size for PCA or FA when the main goal is to estimate only few components?
For factor analysis (not principal component analysis), there is quite a literature calling into question some of the old rules of thumb on the number of observations. Traditional recommendations – at
Minimum sample size for PCA or FA when the main goal is to estimate only few components? For factor analysis (not principal component analysis), there is quite a literature calling into question some of the old rules of thumb on the number of observations. Traditional recommendations – at least within psychometrics – w...
Minimum sample size for PCA or FA when the main goal is to estimate only few components? For factor analysis (not principal component analysis), there is quite a literature calling into question some of the old rules of thumb on the number of observations. Traditional recommendations – at
9,142
Minimum sample size for PCA or FA when the main goal is to estimate only few components?
The idea behind the MVA inequalities is simple: PCA is equivalent to estimating the correlation matrix of the variables. You are trying to guess $\frac{p(p-1)}{2}$ (symmetric matrix) coefficients from $np$ data points. (That is why you should have $n \gg p$.) The equivalence can be seen this way: each PCA step is an optimization proble...
Minimum sample size for PCA or FA when the main goal is to estimate only few components?
The idea behind the MVA inequalities is simple: PCA is equivalent to estimating the correlation matrix of the variables. You are trying to guess $\frac{p(p-1)}{2}$ (symmetric matrix) coefficients from $np$
Minimum sample size for PCA or FA when the main goal is to estimate only few components? The idea behind the MVA inequalities is simple: PCA is equivalent to estimating the correlation matrix of the variables. You are trying to guess $\frac{p(p-1)}{2}$ (symmetric matrix) coefficients from $np$ data points. (That is why you should ...
Minimum sample size for PCA or FA when the main goal is to estimate only few components? The idea behind the MVA inequalities is simple: PCA is equivalent to estimating the correlation matrix of the variables. You are trying to guess $\frac{p(p-1)}{2}$ (symmetric matrix) coefficients from $np$
9,143
Minimum sample size for PCA or FA when the main goal is to estimate only few components?
I hope this might be helpful: for both FA and PCA ''The methods described in this chapter require large samples to derive stable solutions. What constitutes an adequate sample size is somewhat complicated. Until recently, analysts used rules of thumb like “factor analysis requires 5–10 times as many subj...
Minimum sample size for PCA or FA when the main goal is to estimate only few components?
I hope this might be helpful: for both FA and PCA ''The methods described in this chapter require large samples to derive stable solutions. What constitutes an adequate sample size is s
Minimum sample size for PCA or FA when the main goal is to estimate only few components? I hope this might be helpful: for both FA and PCA ''The methods described in this chapter require large samples to derive stable solutions. What constitutes an adequate sample size is somewhat complicated. Until rece...
Minimum sample size for PCA or FA when the main goal is to estimate only few components? I hope this might be helpful: for both FA and PCA ''The methods described in this chapter require large samples to derive stable solutions. What constitutes an adequate sample size is s
9,144
Does a uniform distribution of many p-values give statistical evidence that H0 is true?
I like your question, but unfortunately my answer is NO, it doesn't prove $H_0$. The reason is very simple. How would you know that the distribution of p-values is uniform? You would probably have to run a test for uniformity, which will return its own p-value, and you end up with the same kind of inference quest...
Does a uniform distribution of many p-values give statistical evidence that H0 is true?
I like your question, but unfortunately my answer is NO, it doesn't prove $H_0$. The reason is very simple. How would you know that the distribution of p-values is uniform? You would probably have
Does a uniform distribution of many p-values give statistical evidence that H0 is true? I like your question, but unfortunately my answer is NO, it doesn't prove $H_0$. The reason is very simple. How would you know that the distribution of p-values is uniform? You would probably have to run a test for uniformity whi...
Does a uniform distribution of many p-values give statistical evidence that H0 is true? I like your question, but unfortunately my answer is NO, it doesn't prove $H_0$. The reason is very simple. How would you know that the distribution of p-values is uniform? You would probably have
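For concreteness, the circularity aside, this is how such a uniformity check is usually run: a Kolmogorov-Smirnov test of the p-values against Uniform(0,1). A self-contained Python sketch using the standard asymptotic series for the KS p-value (helper names and the toy samples are mine):

```python
import math
import random

def ks_uniform(pvals):
    """KS statistic of a sample against Uniform(0,1), plus the usual
    asymptotic p-value (Kolmogorov series, small-sample adjusted)."""
    xs = sorted(pvals)
    n = len(xs)
    d = max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
    lam = (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n)) * d
    p = 2 * sum((-1) ** (k - 1) * math.exp(-2 * (k * lam) ** 2)
                for k in range(1, 101))
    return d, max(0.0, min(1.0, p))

rng = random.Random(42)
null_ps = [rng.random() for _ in range(1000)]         # p-values under H0
effect_ps = [rng.random() ** 3 for _ in range(1000)]  # piled near 0, as under H1

d0, p0 = ks_uniform(null_ps)
d1, p1 = ks_uniform(effect_ps)
print(round(p1, 3))  # 0.0: uniformity is decisively rejected for the skewed sample
```

Note the test's verdict on `null_ps` is itself just another p-value, which is exactly the regress the answer warns about.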
9,145
Does a uniform distribution of many p-values give statistical evidence that H0 is true?
Your series of experiments can be viewed as a single experiment with far more data, and as we know, more data is advantageous (eg. typically standard errors decrease as $\sqrt{n}$ increases for independent data). But you ask, "Is this ... enough evidence to conclude that H0 is true?" No. A basic problem is that anothe...
Does a uniform distribution of many p-values give statistical evidence that H0 is true?
Your series of experiments can be viewed as a single experiment with far more data, and as we know, more data is advantageous (eg. typically standard errors decrease as $\sqrt{n}$ increases for indepe
Does a uniform distribution of many p-values give statistical evidence that H0 is true? Your series of experiments can be viewed as a single experiment with far more data, and as we know, more data is advantageous (eg. typically standard errors decrease as $\sqrt{n}$ increases for independent data). But you ask, "Is t...
Does a uniform distribution of many p-values give statistical evidence that H0 is true? Your series of experiments can be viewed as a single experiment with far more data, and as we know, more data is advantageous (eg. typically standard errors decrease as $\sqrt{n}$ increases for indepe
9,146
Does a uniform distribution of many p-values give statistical evidence that H0 is true?
In a sense you are right (see the p-curve) with some small caveats: you need the test to have some power under the alternative. Illustration of the potential problem: generating a p-value as a uniform distribution on 0 to 1 and rejecting when $p \leq \alpha$ is a (admittedly pretty useless) level $\alpha$ test for an...
Does a uniform distribution of many p-values give statistical evidence that H0 is true?
In a sense you are right (see the p-curve) with some small caveats: you need the test to have some power under the alternative. Illustration of the potential problem: generating a p-value as a unifo
Does a uniform distribution of many p-values give statistical evidence that H0 is true? In a sense you are right (see the p-curve) with some small caveats: you need the test to have some power under the alternative. Illustration of the potential problem: generating a p-value as a uniform distribution on 0 to 1 and re...
Does a uniform distribution of many p-values give statistical evidence that H0 is true? In a sense you are right (see the p-curve) with some small caveats: you need the test to have some power under the alternative. Illustration of the potential problem: generating a p-value as a unifo
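The power caveat can be demonstrated directly: a "test" that ignores the data and draws its p-value uniformly is a valid level-$\alpha$ procedure, yet its rejection rate never rises above $\alpha$ no matter how strong the effect. A Python sketch (the toy setup is mine):

```python
import random

def useless_test(_data, rng):
    """A level-alpha test with no power: its p-value is Uniform(0,1)
    regardless of what the data look like."""
    return rng.random()

rng = random.Random(0)
alpha = 0.05
n_trials = 10_000
# Data drawn under a blatant alternative (mean 5, not 0); the test never notices.
rejections = sum(
    useless_test([rng.gauss(5, 1) for _ in range(30)], rng) <= alpha
    for _ in range(n_trials)
)
print(round(rejections / n_trials, 2))  # stays near alpha
```

So observing uniform p-values only supports $H_0$ if the test would actually have produced non-uniform p-values under the alternative.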
9,147
Does a uniform distribution of many p-values give statistical evidence that H0 is true?
Null hypothesis (H0): Gravity causes everything in the universe to fall toward Earth's surface. Alternate hypothesis (H1): Nothing ever falls. Performed 1 million experiments with dozens of household objects, fail to reject H0 at the $\alpha = 0.01$ level every time. Is H0 true?
Does a uniform distribution of many p-values give statistical evidence that H0 is true?
Null hypothesis (H0): Gravity causes everything in the universe to fall toward Earth's surface. Alternate hypothesis (H1): Nothing ever falls. Performed 1 million experiments with dozens of household
Does a uniform distribution of many p-values give statistical evidence that H0 is true? Null hypothesis (H0): Gravity causes everything in the universe to fall toward Earth's surface. Alternate hypothesis (H1): Nothing ever falls. Performed 1 million experiments with dozens of household objects, fail to reject H0 at ...
Does a uniform distribution of many p-values give statistical evidence that H0 is true? Null hypothesis (H0): Gravity causes everything in the universe to fall toward Earth's surface. Alternate hypothesis (H1): Nothing ever falls. Performed 1 million experiments with dozens of household
9,148
Variable selection procedure for binary classification
A very popular approach is penalized logistic regression, in which one maximizes the sum of the log-likelihood and a penalization term consisting of the L1-norm ("lasso"), L2-norm ("ridge"), a combination of the two ("elastic"), or a penalty associated to groups of variables ("group lasso"). This approach has several a...
Variable selection procedure for binary classification
A very popular approach is penalized logistic regression, in which one maximizes the sum of the log-likelihood and a penalization term consisting of the L1-norm ("lasso"), L2-norm ("ridge"), a combina
Variable selection procedure for binary classification A very popular approach is penalized logistic regression, in which one maximizes the sum of the log-likelihood and a penalization term consisting of the L1-norm ("lasso"), L2-norm ("ridge"), a combination of the two ("elastic"), or a penalty associated to groups of...
Variable selection procedure for binary classification A very popular approach is penalized logistic regression, in which one maximizes the sum of the log-likelihood and a penalization term consisting of the L1-norm ("lasso"), L2-norm ("ridge"), a combina
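A bare-bones ridge (L2) flavour of this can be written with nothing but the standard library; the sketch below is illustrative only, not a substitute for glmnet-style solvers (the data setup, learning rate, and names are mine):

```python
import math
import random

def fit_ridge_logistic(X, y, lam=1.0, lr=0.1, steps=2000):
    """L2-penalized logistic regression by plain gradient descent:
    minimizes log-loss plus a penalty that shrinks the coefficients
    of uninformative features toward zero."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(steps):
        grad = [lam * wj for wj in w]               # penalty gradient
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi   # predicted prob minus label
            for j in range(p):
                grad[j] += err * xi[j]
        w = [wj - lr * gj / n for wj, gj in zip(w, grad)]
    return w

rng = random.Random(0)
# Feature 0 drives the label; feature 1 is pure noise.
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(400)]
y = [1 if x0 + rng.gauss(0, 0.5) > 0 else 0 for x0, _ in X]

w = fit_ridge_logistic(X, y)
print(abs(w[0]) > abs(w[1]))  # True: the informative feature dominates
```

An L1 ("lasso") penalty would go further and push the noise coefficient exactly to zero, which is what makes it usable for selection rather than just shrinkage.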
9,149
Variable selection procedure for binary classification
I have a slight preference for Random Forests by Leo Breiman & Adele Cutler for several reasons: it can cope with categorical and continuous predictors, as well as unbalanced class sample sizes; as an ensemble/embedded method, cross-validation is built in and allows one to estimate a generalization error; it is rela...
Variable selection procedure for binary classification
I have a slight preference for Random Forests by Leo Breiman & Adele Cutler for several reasons: it can cope with categorical and continuous predictors, as well as unbalanced class sample sizes
Variable selection procedure for binary classification I have a slight preference for Random Forests by Leo Breiman & Adele Cutler for several reasons: it can cope with categorical and continuous predictors, as well as unbalanced class sample sizes; as an ensemble/embedded method, cross-validation is built in an...
Variable selection procedure for binary classification I have a slight preference for Random Forests by Leo Breiman & Adele Cutler for several reasons: it can cope with categorical and continuous predictors, as well as unbalanced class sample sizes
9,150
Variable selection procedure for binary classification
Metropolis scanning / MCMC Select few features randomly for a start, train classifier only on them and obtain the error. Make some random change to this working set -- either remove one feature, add another at random or replace some feature with one not being currently used. Train new classifier and get its error; st...
Variable selection procedure for binary classification
Metropolis scanning / MCMC Select few features randomly for a start, train classifier only on them and obtain the error. Make some random change to this working set -- either remove one feature, add
Variable selection procedure for binary classification Metropolis scanning / MCMC Select few features randomly for a start, train classifier only on them and obtain the error. Make some random change to this working set -- either remove one feature, add another at random or replace some feature with one not being cur...
Variable selection procedure for binary classification Metropolis scanning / MCMC Select few features randomly for a start, train classifier only on them and obtain the error. Make some random change to this working set -- either remove one feature, add
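The accept/reject rule described above can be sketched as follows; everything here (the toy error function, temperature, and step budget) is an illustrative assumption of mine, not part of the answer:

```python
import math
import random

def metropolis_select(n_features, error_fn, steps=2000, temp=0.05, seed=0):
    """Metropolis scan over feature subsets: propose a random flip
    (add or drop one feature) and accept worse subsets with
    probability exp(-delta / temp), to escape local minima."""
    rng = random.Random(seed)
    current = {rng.randrange(n_features)}
    best, best_err = set(current), error_fn(current)
    err = best_err
    for _ in range(steps):
        j = rng.randrange(n_features)
        proposal = current ^ {j}            # toggle feature j
        if not proposal:
            continue
        new_err = error_fn(proposal)
        if new_err <= err or rng.random() < math.exp((err - new_err) / temp):
            current, err = proposal, new_err
            if err < best_err:
                best, best_err = set(current), err
    return best

# Toy error: features 0 and 1 are useful, the rest only add noise.
def toy_error(subset):
    useful = {0, 1}
    return 1.0 - 0.4 * len(subset & useful) + 0.05 * len(subset - useful)

print(sorted(metropolis_select(10, toy_error)))  # [0, 1]
```

In practice `error_fn` would be a cross-validated classifier error, which is where the real computational cost lies.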
9,151
Variable selection procedure for binary classification
If you are only interested in generalization performance, you are probably better off not performing any feature selection and using regularization instead (e.g. ridge regression). There have been several open challenges in the machine learning community on feature selection, and methods that rely on regularization ra...
Variable selection procedure for binary classification
If you are only interested in generalization performance, you are probably better off not performing any feature selection and using regularization instead (e.g. ridge regression). There have been se
Variable selection procedure for binary classification If you are only interested in generalization performance, you are probably better off not performing any feature selection and using regularization instead (e.g. ridge regression). There have been several open challenges in the machine learning community on featur...
Variable selection procedure for binary classification If you are only interested in generalization performance, you are probably better off not performing any feature selection and using regularization instead (e.g. ridge regression). There have been se
9,152
Variable selection procedure for binary classification
Greedy forward selection. The steps for this method are: Make sure you have a train and validation set Repeat the following Train a classifier with each single feature separately that is not selected yet and with all the previously selected features If the result improves, add the best performing feature, else stop p...
Variable selection procedure for binary classification
Greedy forward selection. The steps for this method are: Make sure you have a train and validation set Repeat the following Train a classifier with each single feature separately that is not selecte
Variable selection procedure for binary classification Greedy forward selection. The steps for this method are: Make sure you have a train and validation set Repeat the following Train a classifier with each single feature separately that is not selected yet and with all the previously selected features If the result...
Variable selection procedure for binary classification Greedy forward selection. The steps for this method are: Make sure you have a train and validation set Repeat the following Train a classifier with each single feature separately that is not selecte
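The steps above can be sketched in a few lines; the scoring function here is a made-up stand-in for "train a classifier and evaluate it on the validation set":

```python
def forward_select(features, score_fn, min_gain=1e-6):
    """Greedy forward selection: repeatedly add the single feature
    that most improves the validation score; stop when nothing helps."""
    selected = []
    best_score = score_fn(selected)
    while True:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        top_score, top_f = max((score_fn(selected + [f]), f) for f in candidates)
        if top_score - best_score < min_gain:
            break                       # no candidate improves the result
        selected.append(top_f)
        best_score = top_score
    return selected

# Toy score: "a" and "b" each help; everything else is neutral.
gains = {"a": 0.3, "b": 0.2, "c": 0.0, "d": 0.0}

def score(subset):
    return sum(gains[f] for f in subset)

print(forward_select(list(gains), score))  # ['a', 'b']
```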
9,153
Variable selection procedure for binary classification
Backward elimination. Start with the full set, then iteratively train the classifier on the remaining features and remove the feature with the smallest importance, stop when the classifier error rapidly increases/becomes unacceptable high. Importance can be even obtained by removing iteratively each feature and check t...
Variable selection procedure for binary classification
Backward elimination. Start with the full set, then iteratively train the classifier on the remaining features and remove the feature with the smallest importance, stop when the classifier error rapid
Variable selection procedure for binary classification Backward elimination. Start with the full set, then iteratively train the classifier on the remaining features and remove the feature with the smallest importance; stop when the classifier error rapidly increases/becomes unacceptably high. Importance can even be ob...
Variable selection procedure for binary classification Backward elimination. Start with the full set, then iteratively train the classifier on the remaining features and remove the feature with the smallest importance, stop when the classifier error rapid
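The backward-elimination loop from this answer, sketched under the same assumptions (hypothetical toy data, a least-squares scoring rule as a stand-in classifier, and an arbitrary 2-percentage-point tolerance for "unacceptably high" error):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: features 0 and 1 carry the signal, the rest are noise.
n, p = 400, 10
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)
X_tr, X_va, y_tr, y_va = X[:300], X[300:], y[:300], y[300:]

def val_accuracy(features):
    """Least-squares score on the chosen features, thresholded at 0.5."""
    A = np.c_[np.ones(len(X_tr)), X_tr[:, features]]
    w, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    pred = (np.c_[np.ones(len(X_va)), X_va[:, features]] @ w) > 0.5
    return (pred == y_va).mean()

remaining = list(range(p))
base = val_accuracy(remaining)
while len(remaining) > 1:
    # The least important feature is the one whose removal hurts validation least.
    trial = {j: val_accuracy([k for k in remaining if k != j]) for j in remaining}
    j_drop = max(trial, key=trial.get)
    if trial[j_drop] < base - 0.02:   # stop: error has become unacceptably high
        break
    base = max(base, trial[j_drop])
    remaining.remove(j_drop)
```

The noise features should be discarded one by one, and the loop should stop once removing either informative feature costs noticeable validation accuracy.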
9,154
Why does my bootstrap interval have terrible coverage?
Bootstrap diagnostics and remedies by Canty, Davison, Hinkley & Ventura (2006) seems to be a logical point of departure. They discuss multiple ways the bootstrap can break down and - more importantly here - offer diagnostics and possible remedies: Outliers Incorrect resampling model Nonpivotality Inconsistency of the ...
Why does my bootstrap interval have terrible coverage?
Bootstrap diagnostics and remedies by Canty, Davison, Hinkley & Ventura (2006) seems to be a logical point of departure. They discuss multiple ways the bootstrap can break down and - more importantly
Why does my bootstrap interval have terrible coverage? Bootstrap diagnostics and remedies by Canty, Davison, Hinkley & Ventura (2006) seems to be a logical point of departure. They discuss multiple ways the bootstrap can break down and - more importantly here - offer diagnostics and possible remedies: Outliers Incorre...
Why does my bootstrap interval have terrible coverage? Bootstrap diagnostics and remedies by Canty, Davison, Hinkley & Ventura (2006) seems to be a logical point of departure. They discuss multiple ways the bootstrap can break down and - more importantly
9,155
Why does my bootstrap interval have terrible coverage?
While I agree with Stephan Kolassa's analysis and his conclusion that $\hat{\mu} - \mu$ (with $\hat{\mu}$ the sample mean) is definitely not an approximate pivot, let me make an additional remark. I investigated the use of the $t$-statistic $$\sqrt{m} \frac{\hat{\mu} - \mu}{\hat{\sigma}}$$ together with bootstrapping. The resu...
Why does my bootstrap interval have terrible coverage?
While I agree with Stephan Kolassa's analysis and his conclusion that $\hat{\mu} - \mu$ (with $\hat{\mu}$ the sample mean) is definitely not an approximate pivot, let me make an additional remark. I investiga
Why does my bootstrap interval have terrible coverage? While I agree with Stephan Kolassa's analysis and his conclusion that $\hat{\mu} - \mu$ (with $\hat{\mu}$ the sample mean) is definitely not an approximate pivot, let me make an additional remark. I investigated the use of the $t$-statistic $$\sqrt{m} \frac{\hat{\mu} - \mu}{\hat{\sigma}}$$ together with bootstrapping. The resu...
Why does my bootstrap interval have terrible coverage? While I agree with Stephan Kolassa's analysis and his conclusion that $\hat{\mu} - \mu$ (with $\hat{\mu}$ the sample mean) is definitely not an approximate pivot, let me make an additional remark. I investiga
9,156
Why does my bootstrap interval have terrible coverage?
The calculations were right, I cross-checked with the well-known package boot. Additionally I added the BCa-interval (by Efron), a bias-corrected version of the percentile bootstrap interval: for (i in 1:1000) { samp <- exp(rnorm(m, 0, 2)) + 1 boot.out <- boot(samp, function(d, i) sum(d[i]) / m, R=999) ci <- boo...
Why does my bootstrap interval have terrible coverage?
The calculations were right, I cross-checked with the well-known package boot. Additionally I added the BCa-interval (by Efron), a bias-corrected version of the percentile bootstrap interval: for (i i
Why does my bootstrap interval have terrible coverage? The calculations were right, I cross-checked with the well-known package boot. Additionally I added the BCa-interval (by Efron), a bias-corrected version of the percentile bootstrap interval: for (i in 1:1000) { samp <- exp(rnorm(m, 0, 2)) + 1 boot.out <- boot...
Why does my bootstrap interval have terrible coverage? The calculations were right, I cross-checked with the well-known package boot. Additionally I added the BCa-interval (by Efron), a bias-corrected version of the percentile bootstrap interval: for (i i
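The coverage failure is easy to reproduce without R's boot package; here is a numpy-only sketch of the plain percentile interval for the mean of the same skewed exp(N(0, 2)) + 1 population (smaller m, B, and replication count than above, purely for speed):

```python
import numpy as np

rng = np.random.default_rng(1)

m, B, reps = 50, 500, 300
true_mean = np.exp(2.0 ** 2 / 2) + 1          # E[exp(N(0, 2)) + 1]

hits = 0
for _ in range(reps):
    samp = np.exp(rng.normal(0.0, 2.0, m)) + 1
    # B bootstrap resamples of the sample mean, drawn in one vectorized step
    boot_means = samp[rng.integers(0, m, size=(B, m))].mean(axis=1)
    lo, hi = np.quantile(boot_means, [0.025, 0.975])
    hits += (lo <= true_mean <= hi)
coverage = hits / reps   # well below the nominal 0.95 for this skewed population
```

The observed coverage should land far below 95%, in line with the numbers discussed in these answers.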
9,157
Why does my bootstrap interval have terrible coverage?
I was confused about this too, and I spent a lot of time on the 1996 DiCiccio and Efron paper Bootstrap Confidence Intervals, without much to show for it. It actually led me to think less of the bootstrap as a general purpose method. I used to think of it as something that would pull you out of a jam when you were r...
Why does my bootstrap interval have terrible coverage?
I was confused about this too, and I spent a lot of time on the 1996 DiCiccio and Efron paper Bootstrap Confidence Intervals, without much to show for it. It actually led me to think less of the bo
Why does my bootstrap interval have terrible coverage? I was confused about this too, and I spent a lot of time on the 1996 DiCiccio and Efron paper Bootstrap Confidence Intervals, without much to show for it. It actually led me to think less of the bootstrap as a general purpose method. I used to think of it as som...
Why does my bootstrap interval have terrible coverage? I was confused about this too, and I spent a lot of time on the 1996 DiCiccio and Efron paper Bootstrap Confidence Intervals, without much to show for it. It actually led me to think less of the bo
9,158
Why does my bootstrap interval have terrible coverage?
Check out Tim Hesterberg's article in The American Statistician at http://www.timhesterberg.net/bootstrap#TOC-What-Teachers-Should-Know-about-the-Bootstrap:-Resampling-in-the-Undergraduate-Statistics-Curriculum. Essentially, the bootstrap percentile interval does not have strong coverage probability for skewed data unl...
Why does my bootstrap interval have terrible coverage?
Check out Tim Hesterberg's article in The American Statistician at http://www.timhesterberg.net/bootstrap#TOC-What-Teachers-Should-Know-about-the-Bootstrap:-Resampling-in-the-Undergraduate-Statistics-
Why does my bootstrap interval have terrible coverage? Check out Tim Hesterberg's article in The American Statistician at http://www.timhesterberg.net/bootstrap#TOC-What-Teachers-Should-Know-about-the-Bootstrap:-Resampling-in-the-Undergraduate-Statistics-Curriculum. Essentially, the bootstrap percentile interval does n...
Why does my bootstrap interval have terrible coverage? Check out Tim Hesterberg's article in The American Statistician at http://www.timhesterberg.net/bootstrap#TOC-What-Teachers-Should-Know-about-the-Bootstrap:-Resampling-in-the-Undergraduate-Statistics-
9,159
Why do we not care about completeness, sufficiency of an estimator as much anymore?
We still care. However, a large part of statistics is now based on a data-driven approach where these concepts may not be essential or there are many other important concepts. With computational power and lots of data, a large body of statistics is devoted to providing models that solve specific problems (such as forecast...
Why do we not care about completeness, sufficiency of an estimator as much anymore?
We still care. However, a large part of statistics is now based on a data-driven approach where these concepts may not be essential or there are many other important concepts. With computational power
Why do we not care about completeness, sufficiency of an estimator as much anymore? We still care. However, a large part of statistics is now based on a data-driven approach where these concepts may not be essential or there are many other important concepts. With computational power and lots of data, a large body of st...
Why do we not care about completeness, sufficiency of an estimator as much anymore? We still care. However, a large part of statistics is now based on a data-driven approach where these concepts may not be essential or there are many other important concepts. With computational power
9,160
Why do we not care about completeness, sufficiency of an estimator as much anymore?
We do care but usually either the issue is taken care of, or we're not making a specific distributional assumption with which we could apply those considerations. Many of the usual estimators for commonly used parametric models are either fully efficient under the usual distributional assumptions for that model or as...
Why do we not care about completeness, sufficiency of an estimator as much anymore?
We do care but usually either the issue is taken care of, or we're not making a specific distributional assumption with which we could apply those considerations. Many of the usual estimators for co
Why do we not care about completeness, sufficiency of an estimator as much anymore? We do care but usually either the issue is taken care of, or we're not making a specific distributional assumption with which we could apply those considerations. Many of the usual estimators for commonly used parametric models are ei...
Why do we not care about completeness, sufficiency of an estimator as much anymore? We do care but usually either the issue is taken care of, or we're not making a specific distributional assumption with which we could apply those considerations. Many of the usual estimators for co
9,161
How does the L-BFGS work?
Basically think of L-BFGS as a way of finding a (local) minimum of an objective function, making use of objective function values and the gradient of the objective function. That level of description covers many optimization methods in addition to L-BFGS though. You can read more about it in section 7.2 of Nocedal and ...
How does the L-BFGS work?
Basically think of L-BFGS as a way of finding a (local) minimum of an objective function, making use of objective function values and the gradient of the objective function. That level of description
How does the L-BFGS work? Basically think of L-BFGS as a way of finding a (local) minimum of an objective function, making use of objective function values and the gradient of the objective function. That level of description covers many optimization methods in addition to L-BFGS though. You can read more about it in s...
How does the L-BFGS work? Basically think of L-BFGS as a way of finding a (local) minimum of an objective function, making use of objective function values and the gradient of the objective function. That level of description
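As a usage sketch (assuming SciPy is available), here is its L-BFGS-B implementation applied to the two-dimensional Rosenbrock test function; maxcor is the number of stored correction pairs, i.e. the "limited memory" part:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Classic starting point for the 2-D Rosenbrock function; the minimum is at (1, 1).
x0 = np.array([-1.2, 1.0])
res = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B",
               options={"maxcor": 10})   # keep only the last 10 (s, y) pairs
```

Rather than storing a dense approximate Hessian as BFGS does, only those 10 recent (step, gradient-change) pairs are kept, which is what makes the method feasible in high dimensions.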
9,162
Which is the best visualization for contingency tables?
There isn't going to be a one-size-fits-all solution here. If you have a very simple table (e.g., $2\times 2$), simply presenting the table is probably best. If you want an actual figure, mosaic plots (as @xan suggests) are probably a nice place to start. There are some other options that are analogous to mosaic plo...
Which is the best visualization for contingency tables?
There isn't going to be a one-size-fits-all solution here. If you have a very simple table (e.g., $2\times 2$), simply presenting the table is probably best. If you want an actual figure, mosaic plo
Which is the best visualization for contingency tables? There isn't going to be a one-size-fits-all solution here. If you have a very simple table (e.g., $2\times 2$), simply presenting the table is probably best. If you want an actual figure, mosaic plots (as @xan suggests) are probably a nice place to start. There...
Which is the best visualization for contingency tables? There isn't going to be a one-size-fits-all solution here. If you have a very simple table (e.g., $2\times 2$), simply presenting the table is probably best. If you want an actual figure, mosaic plo
9,163
Which is the best visualization for contingency tables?
Different visuals will be better at highlighting different features, but Mosaic plots work well for a general view (checking to see if anything stands out). Maybe that's what you meant by dodged bar plot. Like most options, they're not symmetric in that they represent relative frequencies better in one dimension than t...
Which is the best visualization for contingency tables?
Different visuals will be better at highlighting different features, but Mosaic plots work well for a general view (checking to see if anything stands out). Maybe that's what you meant by dodged bar p
Which is the best visualization for contingency tables? Different visuals will be better at highlighting different features, but Mosaic plots work well for a general view (checking to see if anything stands out). Maybe that's what you meant by dodged bar plot. Like most options, they're not symmetric in that they repre...
Which is the best visualization for contingency tables? Different visuals will be better at highlighting different features, but Mosaic plots work well for a general view (checking to see if anything stands out). Maybe that's what you meant by dodged bar p
9,164
Which is the best visualization for contingency tables?
I agree that the "best" plot doesn't exist independent of dataset, readership and purpose. For two measured variables, scatter plots are arguably the design that leaves all others in its wake, except for specific purposes, but no such market leader is evident for categorical data. My aim here is just to mention a simpl...
Which is the best visualization for contingency tables?
I agree that the "best" plot doesn't exist independent of dataset, readership and purpose. For two measured variables, scatter plots are arguably the design that leaves all others in its wake, except
Which is the best visualization for contingency tables? I agree that the "best" plot doesn't exist independent of dataset, readership and purpose. For two measured variables, scatter plots are arguably the design that leaves all others in its wake, except for specific purposes, but no such market leader is evident for ...
Which is the best visualization for contingency tables? I agree that the "best" plot doesn't exist independent of dataset, readership and purpose. For two measured variables, scatter plots are arguably the design that leaves all others in its wake, except
9,165
Which is the best visualization for contingency tables?
To complement @gung's and @xan's answers, here's an example of mosaic and association plots using vcd in R. > tab period activity morning noon afternoon evening feed 28 4 0 56 social 38 5 9 10 travel 6 6 14 13 To obtain the plots: require...
Which is the best visualization for contingency tables?
To complement @gung's and @xan's answers, here's an example of mosaic and association plots using vcd in R. > tab period activity morning noon afternoon evening feed 28 4
Which is the best visualization for contingency tables? To complement @gung's and @xan's answers, here's an example of mosaic and association plots using vcd in R. > tab period activity morning noon afternoon evening feed 28 4 0 56 social 38 5 9 10 travel ...
Which is the best visualization for contingency tables? To complement @gung's and @xan's answers, here's an example of mosaic and association plots using vcd in R. > tab period activity morning noon afternoon evening feed 28 4
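The association plot in vcd draws the Pearson residuals $(O - E)/\sqrt{E}$; the same quantities are easy to compute by hand. A Python sketch on the activity-by-period counts from this answer:

```python
import numpy as np

# Activity-by-period counts from the table above.
N = np.array([[28.0, 4.0, 0.0, 56.0],    # feed
              [38.0, 5.0, 9.0, 10.0],    # social
              [6.0, 6.0, 14.0, 13.0]])   # travel

expected = np.outer(N.sum(axis=1), N.sum(axis=0)) / N.sum()
residuals = (N - expected) / np.sqrt(expected)   # what assoc() draws as bars
chi2 = (residuals ** 2).sum()                    # the usual Pearson chi-square
```

Cells with large positive residuals (e.g. feed in the evening) are exactly the ones the association plot highlights.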
9,166
Which is the best visualization for contingency tables?
One idea that is sometimes useful, especially for somewhat large tables, is to reorder rows/columns to make any structure clearer. One way to do the reordering is to use the sort order of the correspondence analysis row/column scores (on the first eigenvalue). I will show this by an example, in R: library(FactoMineR) d...
Which is the best visualization for contingency tables?
One idea that is sometimes useful, especially for somewhat large tables, is to reorder rows/columns to make any structure clearer. One way to do the reordering is to use the sort order of the correspo
Which is the best visualization for contingency tables? One idea that is sometimes useful, especially for somewhat large tables, is to reorder rows/columns to make any structure clearer. One way to do the reordering is to use the sort order of the correspondence analysis row/column scores (on the first eigenvalue). I w...
Which is the best visualization for contingency tables? One idea that is sometimes useful, especially for somewhat large tables, is to reorder rows/columns to make any structure clearer. One way to do the reordering is to use the sort order of the correspo
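The correspondence-analysis reordering can also be done by hand: form the standardized residual matrix, take its SVD, and sort rows and columns by their first-axis scores. A numpy sketch (reusing the small activity-by-period table from the vcd answer above as example data):

```python
import numpy as np

# Example contingency table (the activity-by-period counts used above).
N = np.array([[28.0, 4.0, 0.0, 56.0],
              [38.0, 5.0, 9.0, 10.0],
              [6.0, 6.0, 14.0, 13.0]])

P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, d, Vt = np.linalg.svd(S, full_matrices=False)

row_scores = U[:, 0] / np.sqrt(r)   # first-axis row scores (standard coordinates)
col_scores = Vt[0] / np.sqrt(c)     # first-axis column scores
reordered = N[np.argsort(row_scores)][:, np.argsort(col_scores)]
```

Sorting by the first-axis scores is the same ordering FactoMineR's first eigenvalue gives, so the reordered table tends to concentrate the large cells near the diagonal.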
9,167
Sum of exponential random variables follows Gamma, confused by the parameters
The sum of $n$ independent Gamma random variables $\sim \Gamma(t_i, \lambda)$ is a Gamma random variable $\sim \Gamma\left(\sum_i t_i, \lambda\right)$. It does not matter what the second parameter means (scale or inverse of scale) as long as all $n$ random variables have the same second parameter. This idea extends rea...
Sum of exponential random variables follows Gamma, confused by the parameters
The sum of $n$ independent Gamma random variables $\sim \Gamma(t_i, \lambda)$ is a Gamma random variable $\sim \Gamma\left(\sum_i t_i, \lambda\right)$. It does not matter what the second parameter mea
Sum of exponential random variables follows Gamma, confused by the parameters The sum of $n$ independent Gamma random variables $\sim \Gamma(t_i, \lambda)$ is a Gamma random variable $\sim \Gamma\left(\sum_i t_i, \lambda\right)$. It does not matter what the second parameter means (scale or inverse of scale) as long as ...
Sum of exponential random variables follows Gamma, confused by the parameters The sum of $n$ independent Gamma random variables $\sim \Gamma(t_i, \lambda)$ is a Gamma random variable $\sim \Gamma\left(\sum_i t_i, \lambda\right)$. It does not matter what the second parameter mea
9,168
Sum of exponential random variables follows Gamma, confused by the parameters
The sum of $n$ iid exponential distributions with scale $\theta$ (rate $\theta^{-1}$) is gamma-distributed with shape $n$ and scale $\theta$ (rate $\theta^{-1}$).
Sum of exponential random variables follows Gamma, confused by the parameters
The sum of $n$ iid exponential distributions with scale $\theta$ (rate $\theta^{-1}$) is gamma-distributed with shape $n$ and scale $\theta$ (rate $\theta^{-1}$).
Sum of exponential random variables follows Gamma, confused by the parameters The sum of $n$ iid exponential distributions with scale $\theta$ (rate $\theta^{-1}$) is gamma-distributed with shape $n$ and scale $\theta$ (rate $\theta^{-1}$).
Sum of exponential random variables follows Gamma, confused by the parameters The sum of $n$ iid exponential distributions with scale $\theta$ (rate $\theta^{-1}$) is gamma-distributed with shape $n$ and scale $\theta$ (rate $\theta^{-1}$).
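A quick simulation check of this statement (numpy sketch with arbitrary n = 5 and θ = 2): the sums should match the Gamma(shape n, scale θ) mean nθ and variance nθ².

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta, reps = 5, 2.0, 200_000

# reps independent sums, each of n iid Exponential(scale=theta) draws
sums = rng.exponential(scale=theta, size=(reps, n)).sum(axis=1)

# Gamma(shape=n, scale=theta) has mean n*theta and variance n*theta**2.
mean_err = abs(sums.mean() - n * theta)
var_err = abs(sums.var() - n * theta ** 2)
```

Both errors should be tiny relative to the target values of 10 and 20, which is consistent with the sums being Gamma(5, scale 2).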
9,169
Sum of exponential random variables follows Gamma, confused by the parameters
The gamma distribution is built from the exponential distribution; that is, the exponential distribution is the base case of the gamma distribution. Then if $f(x|\lambda)=\lambda e^{-\lambda x}$, we have $\sum_n x_i \sim \text{Gamma}(n,\lambda)$, as long as all $X_i$ are independent. $$f(x|\alpha,\beta)=\frac{\beta^\alpha}{\Gamma(\alpha)} \cdot x^{\alph...
Sum of exponential random variables follows Gamma, confused by the parameters
The gamma distribution is built from the exponential distribution; that is, the exponential distribution is the base case of the gamma distribution. Then if $f(x|\lambda)=\lambda e^{-\lambda x}$, we have $\sum_n x_i \sim \text{Gam
Sum of exponential random variables follows Gamma, confused by the parameters The gamma distribution is built from the exponential distribution; that is, the exponential distribution is the base case of the gamma distribution. Then if $f(x|\lambda)=\lambda e^{-\lambda x}$, we have $\sum_n x_i \sim \text{Gamma}(n,\lambda)$, as long as all $X_i$ are ...
Sum of exponential random variables follows Gamma, confused by the parameters The gamma distribution is built from the exponential distribution; that is, the exponential distribution is the base case of the gamma distribution. Then if $f(x|\lambda)=\lambda e^{-\lambda x}$, we have $\sum_n x_i \sim \text{Gam
9,170
Why are p-values misleading after performing a stepwise selection?
after performing a stepwise selection based on the AIC criterion, it is misleading to look at the p-values to test the null hypothesis that each true regression coefficient is zero. Indeed, p-values represent the probability of seeing a test statistic at least as extreme as the one you have, when the null hypothesis i...
Why are p-values misleading after performing a stepwise selection?
after performing a stepwise selection based on the AIC criterion, it is misleading to look at the p-values to test the null hypothesis that each true regression coefficient is zero. Indeed, p-values
Why are p-values misleading after performing a stepwise selection? after performing a stepwise selection based on the AIC criterion, it is misleading to look at the p-values to test the null hypothesis that each true regression coefficient is zero. Indeed, p-values represent the probability of seeing a test statistic ...
Why are p-values misleading after performing a stepwise selection? after performing a stepwise selection based on the AIC criterion, it is misleading to look at the p-values to test the null hypothesis that each true regression coefficient is zero. Indeed, p-values
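The inflation is easy to demonstrate: generate pure-noise predictors, "select" the one that looks best, and then read off its naive p-value as if it had been pre-specified. A Python sketch (SciPy used only for the t-distribution tail; all settings arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, p, reps = 50, 20, 500

rejections = 0
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)                      # every true coefficient is zero
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    rb = r[np.argmax(np.abs(r))]                # keep only the best-looking predictor
    t = rb * np.sqrt((n - 2) / (1 - rb ** 2))   # its naive t-statistic
    pval = 2 * stats.t.sf(abs(t), df=n - 2)
    rejections += pval < 0.05
false_positive_rate = rejections / reps          # far above the nominal 0.05
```

Because the winning predictor was chosen to look extreme, the nominal 5% test rejects a true null far more than 5% of the time, which is exactly the problem with reading p-values after selection.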
9,171
Why are p-values misleading after performing a stepwise selection?
An analogy may help. Stepwise regression when the candidate variables are indicator (dummy) variables representing mutually exclusive categories (as in ANOVA) corresponds exactly to choosing which groups to combine by finding out which groups are minimally different by $t$-tests. If the original ANOVA was tested agai...
Why are p-values misleading after performing a stepwise selection?
An analogy may help. Stepwise regression when the candidate variables are indicator (dummy) variables representing mutually exclusive categories (as in ANOVA) corresponds exactly to choosing which gr
Why are p-values misleading after performing a stepwise selection? An analogy may help. Stepwise regression when the candidate variables are indicator (dummy) variables representing mutually exclusive categories (as in ANOVA) corresponds exactly to choosing which groups to combine by finding out which groups are minim...
Why are p-values misleading after performing a stepwise selection? An analogy may help. Stepwise regression when the candidate variables are indicator (dummy) variables representing mutually exclusive categories (as in ANOVA) corresponds exactly to choosing which gr
9,172
Why is $SST=SSE + SSR$? (One variable linear regression)
Adding and subtracting gives \begin{eqnarray*} \sum_{i=1}^n (y_i-\bar y)^2&=&\sum_{i=1}^n (y_i-\hat y_i+\hat y_i-\bar y)^2\\ &=&\sum_{i=1}^n (y_i-\hat y_i)^2+2\sum_{i=1}^n(y_i-\hat y_i)(\hat y_i-\bar y)+\sum_{i=1}^n(\hat y_i-\bar y)^2 \end{eqnarray*} So we need to show that $\sum_{i=1}^n(y_i-\hat y_i)(\hat y_i-\bar y)=...
Why is $SST=SSE + SSR$? (One variable linear regression)
Adding and subtracting gives \begin{eqnarray*} \sum_{i=1}^n (y_i-\bar y)^2&=&\sum_{i=1}^n (y_i-\hat y_i+\hat y_i-\bar y)^2\\ &=&\sum_{i=1}^n (y_i-\hat y_i)^2+2\sum_{i=1}^n(y_i-\hat y_i)(\hat y_i-\bar
Why is $SST=SSE + SSR$? (One variable linear regression) Adding and subtracting gives \begin{eqnarray*} \sum_{i=1}^n (y_i-\bar y)^2&=&\sum_{i=1}^n (y_i-\hat y_i+\hat y_i-\bar y)^2\\ &=&\sum_{i=1}^n (y_i-\hat y_i)^2+2\sum_{i=1}^n(y_i-\hat y_i)(\hat y_i-\bar y)+\sum_{i=1}^n(\hat y_i-\bar y)^2 \end{eqnarray*} So we need t...
Why is $SST=SSE + SSR$? (One variable linear regression) Adding and subtracting gives \begin{eqnarray*} \sum_{i=1}^n (y_i-\bar y)^2&=&\sum_{i=1}^n (y_i-\hat y_i+\hat y_i-\bar y)^2\\ &=&\sum_{i=1}^n (y_i-\hat y_i)^2+2\sum_{i=1}^n(y_i-\hat y_i)(\hat y_i-\bar
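The vanishing cross term, and hence the identity, can be verified numerically for a straight-line fit with intercept (Python sketch on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=100)
y = 2 + 3 * x + rng.normal(size=100)

b1, b0 = np.polyfit(x, y, 1)        # least-squares slope and intercept
yhat = b0 + b1 * x

sst = ((y - y.mean()) ** 2).sum()
sse = ((y - yhat) ** 2).sum()
ssr = ((yhat - y.mean()) ** 2).sum()
# sst equals sse + ssr up to floating-point rounding
```

The equality holds to machine precision precisely because least squares with an intercept forces the residuals to be orthogonal to the fitted values minus their mean.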
9,173
Why is $SST=SSE + SSR$? (One variable linear regression)
This is just the Pythagorean theorem! [figure: right triangle with legs $X\hat{\beta}$ and $Y-X\hat{\beta}$ and hypotenuse $Y$] Hence, $$Y'Y=(Y-X\hat{\beta})'(Y-X\hat{\beta})+(X\hat{\beta})'X\hat{\beta}$$ or $$SST=SSE+SSR$$
Why is $SST=SSE + SSR$? (One variable linear regression)
This is just the Pythagorean theorem! [figure: right triangle with legs $X\hat{\beta}$ and $Y-X\hat{\beta}$ and hypotenuse $Y$] Hence, $$Y'Y=(Y-X\hat{\beta})'(Y-X\hat{\beta})+(X\hat{\beta})'X\hat{\beta}$$ or $$SST=SSE+SSR$$
Why is $SST=SSE + SSR$? (One variable linear regression) This is just the Pythagorean theorem! [figure: right triangle with legs $X\hat{\beta}$ and $Y-X\hat{\beta}$ and hypotenuse $Y$] Hence, $$Y'Y=(Y-X\hat{\beta})'(Y-X\hat{\beta})+(X\hat{\beta})'X\hat{\beta}$$ or $$SST=SSE+SSR$$
Why is $SST=SSE + SSR$? (One variable linear regression) This is just the Pythagorean theorem! [figure: right triangle with legs $X\hat{\beta}$ and $Y-X\hat{\beta}$ and hypotenuse $Y$] Hence, $$Y'Y=(Y-X\hat{\beta})'(Y-X\hat{\beta})+(X\hat{\beta})'X\hat{\beta}$$ or $$SST=SSE+SSR$$
9,174
Why is $SST=SSE + SSR$? (One variable linear regression)
(1) Intuition for why $SST = SSR + SSE$ When we try to explain the total variation in Y ($SST$) with one explanatory variable, X, then there are exactly two sources of variability. First, there is the variability captured by X (Sum Square Regression), and second, there is the variability not captured by X (Sum Square ...
Why is $SST=SSE + SSR$? (One variable linear regression)
(1) Intuition for why $SST = SSR + SSE$ When we try to explain the total variation in Y ($SST$) with one explanatory variable, X, then there are exactly two sources of variability. First, there is th
Why is $SST=SSE + SSR$? (One variable linear regression) (1) Intuition for why $SST = SSR + SSE$ When we try to explain the total variation in Y ($SST$) with one explanatory variable, X, then there are exactly two sources of variability. First, there is the variability captured by X (Sum Square Regression), and second...
Why is $SST=SSE + SSR$? (One variable linear regression) (1) Intuition for why $SST = SSR + SSE$ When we try to explain the total variation in Y ($SST$) with one explanatory variable, X, then there are exactly two sources of variability. First, there is th
9,175
Why is $SST=SSE + SSR$? (One variable linear regression)
When an intercept is included in linear regression (sum of residuals is zero), $SST=SSE+SSR$. Proof: $$ \begin{eqnarray*} SST&=&\sum_{i=1}^n (y_i-\bar y)^2\\&=&\sum_{i=1}^n (y_i-\hat y_i+\hat y_i-\bar y)^2\\&=&\sum_{i=1}^n (y_i-\hat y_i)^2+2\sum_{i=1}^n(y_i-\hat y_i)(\hat y_i-\bar y)+\sum_{i=1}^n(\hat y_i-\bar y)^2\\&=&S...
Why is $SST=SSE + SSR$? (One variable linear regression)
When an intercept is included in linear regression (sum of residuals is zero), $SST=SSE+SSR$. Proof: $$ \begin{eqnarray*} SST&=&\sum_{i=1}^n (y_i-\bar y)^2\\&=&\sum_{i=1}^n (y_i-\hat y_i+\hat y_i-\bar y
Why is $SST=SSE + SSR$? (One variable linear regression) When an intercept is included in linear regression (sum of residuals is zero), $SST=SSE+SSR$. Proof: $$ \begin{eqnarray*} SST&=&\sum_{i=1}^n (y_i-\bar y)^2\\&=&\sum_{i=1}^n (y_i-\hat y_i+\hat y_i-\bar y)^2\\&=&\sum_{i=1}^n (y_i-\hat y_i)^2+2\sum_{i=1}^n(y_i-\hat y_...
Why is $SST=SSE + SSR$? (One variable linear regression) When an intercept is included in linear regression (sum of residuals is zero), $SST=SSE+SSR$. Proof: $$ \begin{eqnarray*} SST&=&\sum_{i=1}^n (y_i-\bar y)^2\\&=&\sum_{i=1}^n (y_i-\hat y_i+\hat y_i-\bar y
9,176
Why is $SST=SSE + SSR$? (One variable linear regression)
Here is a great graphical representation of why SST = SSR + SSE.
Why is $SST=SSE + SSR$? (One variable linear regression)
Here is a great graphical representation of why SST = SSR + SSE.
Why is $SST=SSE + SSR$? (One variable linear regression) Here is a great graphical representation of why SST = SSR + SSE.
Why is $SST=SSE + SSR$? (One variable linear regression) Here is a great graphical representation of why SST = SSR + SSE.
9,177
Why is $SST=SSE + SSR$? (One variable linear regression)
If a model predicts $3$ and the residual is $2$ because the actual value is $5$, it doesn't look like variance is decomposing, since $3^2 + 2^2 \neq 5^2$. If you only have one data point, your model would fit it perfectly and the residual would be zero, so you can't get that case by itself. There have to be multiple da...
Why is $SST=SSE + SSR$? (One variable linear regression)
If a model predicts $3$ and the residual is $2$ because the actual value is $5$, it doesn't look like variance is decomposing, since $3^2 + 2^2 \neq 5^2$. If you only have one data point, your model w
Why is $SST=SSE + SSR$? (One variable linear regression) If a model predicts $3$ and the residual is $2$ because the actual value is $5$, it doesn't look like variance is decomposing, since $3^2 + 2^2 \neq 5^2$. If you only have one data point, your model would fit it perfectly and the residual would be zero, so you ca...
Why is $SST=SSE + SSR$? (One variable linear regression) If a model predicts $3$ and the residual is $2$ because the actual value is $5$, it doesn't look like variance is decomposing, since $3^2 + 2^2 \neq 5^2$. If you only have one data point, your model w
9,178
How exactly is sparse PCA better than PCA?
Whether sparse PCA is easier to interpret than standard PCA or not depends on the dataset you are investigating. Here is how I think about it: sometimes one is more interested in the PCA projections (low dimensional representation of the data), and sometimes -- in the principal axes; it is only in the latter case that...
How exactly is sparse PCA better than PCA?
Whether sparse PCA is easier to interpret than standard PCA or not depends on the dataset you are investigating. Here is how I think about it: sometimes one is more interested in the PCA projections
How exactly is sparse PCA better than PCA? Whether sparse PCA is easier to interpret than standard PCA or not depends on the dataset you are investigating. Here is how I think about it: sometimes one is more interested in the PCA projections (low dimensional representation of the data), and sometimes -- in the princip...
How exactly is sparse PCA better than PCA? Whether sparse PCA is easier to interpret than standard PCA or not depends on the dataset you are investigating. Here is how I think about it: sometimes one is more interested in the PCA projections
9,179
How exactly is sparse PCA better than PCA?
To understand the advantages of sparsity in PCA, you need to make sure you know the difference between "loadings" and "variables" (to me these names are somewhat arbitrary, but that's not important). Say you have an $n\times p$ data matrix $\textbf{X}$, where $n$ is the number of samples. The SVD of $\textbf{X}=\textb...
How exactly is sparse PCA better than PCA?
To understand the advantages of sparsity in PCA, you need to make sure you know the difference between "loadings" and "variables" (to me these names are somewhat arbitrary, but that's not important).
How exactly is sparse PCA better than PCA? To understand the advantages of sparsity in PCA, you need to make sure you know the difference between "loadings" and "variables" (to me these names are somewhat arbitrary, but that's not important). Say you have an $n\times p$ data matrix $\textbf{X}$, where $n$ is the numbe...
How exactly is sparse PCA better than PCA? To understand the advantages of sparsity in PCA, you need to make sure you know the difference between "loadings" and "variables" (to me these names are somewhat arbitrary, but that's not important).
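To see the loadings-sparsity point concretely, here is a scikit-learn sketch on hypothetical data with two blocks of correlated variables (alpha controls the L1 penalty on the loadings; larger values give sparser loadings):

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(5)
z1, z2 = rng.normal(size=(100, 1)), rng.normal(size=(100, 1))
# Variables 0-4 follow one latent factor, variables 5-9 another.
X = np.hstack([z1 + 0.2 * rng.normal(size=(100, 5)),
               z2 + 0.2 * rng.normal(size=(100, 5))])
X -= X.mean(axis=0)

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

dense_zeros = int((np.abs(pca.components_) < 1e-12).sum())
sparse_zeros = int((np.abs(spca.components_) < 1e-12).sum())
# The sparse loadings set many entries exactly to zero; plain PCA loadings do not.
```

The sparse components tend to load on one block of variables each with the other block zeroed out, which is exactly the "few non-zero loadings per axis" interpretability argument made in this answer.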
9,180
How exactly is sparse PCA better than PCA?
So you can eliminate the last few principal components, as they will not cause a lot of loss of data, and you can compress the data. Right? Yes, you're right. And if there are $N$ variables $V_1, V_2, \cdots , V_N$, you then have $N$ Principal Components $PC_1, PC_2, \cdots , PC_N$, and every variable $V_i$ has an info...
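A minimal sketch of the compression idea (my own example, not from the answer): keep only the leading components, store the low-dimensional scores, and reconstruct. The data here is deliberately built from 2 latent factors, so 2 components recover it almost perfectly.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 200 samples, 6 variables driven by only 2 latent factors plus small noise
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(200, 6))

pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)                # compressed data: 200 x 2 instead of 200 x 6
X_hat = pca.inverse_transform(Z)    # reconstruction from the kept components

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

The relative reconstruction error stays small precisely because the discarded components carry little variance.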
9,181
How exactly is sparse PCA better than PCA?
Like all good things, it depends. After applying PCA, you can again represent the data in the same dimensional space, but this time the first principal component will contain the most variance, the second will contain the direction with the second-most variance, and so on. So you can eliminate the last few principal components, as ...
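The variance ordering can be checked directly (a quick illustration of my own): `explained_variance_ratio_` is sorted in decreasing order and, when all components are kept, sums to 1.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# 5 variables with very different scales, so the PCs have distinct variances
X = rng.normal(size=(150, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])

ratios = PCA().fit(X).explained_variance_ratio_
```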
9,182
Proof that moment generating functions uniquely determine probability distributions
The general proof of this can be found in Feller (An Introduction to Probability Theory and Its Applications, Vol. 2). It is an inversion problem involving Laplace transform theory. Did you notice that the mgf bears a striking resemblance to the Laplace transform? For the use of the Laplace transform, see Widder (...
Proof that moment generating functions uniquely determine probability distributions
The general proof of this can be found in Feller (An Introduction to Probability Theory and Its Applications, Vol. 2). It is an inversion problem involving Laplace transform theory. Did you notice tha
Proof that moment generating functions uniquely determine probability distributions The general proof of this can be found in Feller (An Introduction to Probability Theory and Its Applications, Vol. 2). It is an inversion problem involving Laplace transform theory. Did you notice that the mgf bears a striking resemblan...
Proof that moment generating functions uniquely determine probability distributions The general proof of this can be found in Feller (An Introduction to Probability Theory and Its Applications, Vol. 2). It is an inversion problem involving Laplace transform theory. Did you notice tha
9,183
Proof that moment generating functions uniquely determine probability distributions
The theorem you are discussing is a basic result in probability/measure theory. The proofs would more likely be found in books on probability or statistical theory. I found the analogous result for characteristic functions given in Hoel, Port and Stone (pp. 205-208), Tucker (pp. 51-53), and Chung (pp. 151-155). This is the Third ...
Proof that moment generating functions uniquely determine probability distributions
The theorem you are discussing is a basic result in probability/measure theory. The proofs would more likely be found in books on probability or statistical theory. I found the analogous result for
Proof that moment generating functions uniquely determine probability distributions The theorem you are discussing is a basic result in probability/measure theory. The proofs would more likely be found in books on probability or statistical theory. I found the analogous result for characteristic functions given in Ho...
Proof that moment generating functions uniquely determine probability distributions The theorem you are discussing is a basic result in probability/measure theory. The proofs would more likely be found in books on probability or statistical theory. I found the analogous result for
9,184
Proof that moment generating functions uniquely determine probability distributions
Denote the moment generating function of $X$ by $M_X(t)=Ee^{tX}$. Uniqueness Theorem. If there exists $\delta>0$ such that $M_X(t) = M_Y(t) < \infty$ for all $t \in (-\delta,\delta)$, then $F_X(t) = F_Y(t)$ for all $t \in \mathbb{R}$. To prove that the moment generating function determines the distribution, there are...
Proof that moment generating functions uniquely determine probability distributions
Denote the moment generating function of $X$ by $M_X(t)=Ee^{tX}$. Uniqueness Theorem. If there exists $\delta>0$ such that $M_X(t) = M_Y(t) < \infty$ for all $t \in (-\delta,\delta)$, then $F_X(t) =
Proof that moment generating functions uniquely determine probability distributions Denote the moment generating function of $X$ by $M_X(t)=Ee^{tX}$. Uniqueness Theorem. If there exists $\delta>0$ such that $M_X(t) = M_Y(t) < \infty$ for all $t \in (-\delta,\delta)$, then $F_X(t) = F_Y(t)$ for all $t \in \mathbb{R}$. ...
Proof that moment generating functions uniquely determine probability distributions Denote the moment generating function of $X$ by $M_X(t)=Ee^{tX}$. Uniqueness Theorem. If there exists $\delta>0$ such that $M_X(t) = M_Y(t) < \infty$ for all $t \in (-\delta,\delta)$, then $F_X(t) =
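As a small numerical illustration of the object in the theorem (my own check, not part of the proof): for $X \sim N(\mu, \sigma^2)$ the mgf is $M_X(t) = e^{\mu t + \sigma^2 t^2 / 2}$, and a Monte Carlo estimate of $Ee^{tX}$ matches it closely.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, t = 1.0, 0.5, 0.7

x = rng.normal(mu, sigma, size=200_000)
mgf_empirical = np.exp(t * x).mean()                 # Monte Carlo E[e^{tX}]
mgf_analytic = np.exp(mu * t + 0.5 * sigma**2 * t**2)  # closed form for N(mu, sigma^2)
```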
9,185
Deep learning : How do I know which variables are important?
What you describe is indeed one standard way of quantifying the importance of neural-net inputs. Note that in order for this to work, however, the input variables must be normalized in some way. Otherwise weights corresponding to input variables that tend to have larger values will be proportionally smaller. There a...
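One concrete version of the weight-based recipe is Garson's algorithm. Here is a sketch for the 10-input, 5-hidden-unit network described in the question; the weight matrices are random placeholders standing in for a trained model, so the numbers themselves are meaningless, only the mechanics are illustrated.

```python
import numpy as np

rng = np.random.default_rng(4)
# placeholder 'trained' weights: 10 inputs -> 5 hidden units -> 1 output
W1 = rng.normal(size=(10, 5))    # input-to-hidden weights
W2 = rng.normal(size=(5, 1))     # hidden-to-output weights

# Garson-style importance: |input->hidden| x |hidden->output|, normalised
contrib = np.abs(W1) * np.abs(W2).T    # (10, 5) contribution matrix
contrib /= contrib.sum(axis=0)         # share within each hidden unit
importance = contrib.sum(axis=1)
importance /= importance.sum()         # importances sum to 1
```

As the answer notes, this is only meaningful when the input variables are on comparable scales.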
9,186
Deep learning : How do I know which variables are important?
A somewhat brute-force but effective solution: try 'dropping' an input by using a constant for one of your input features. Then, train the network for each of the possible cases and see how your accuracy drops. Important inputs will provide the greatest benefit to overall accuracy.
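The retrain-with-a-constant idea can be sketched like this (my own toy setup, with a logistic regression standing in for the network): replace one feature at a time by its training mean, retrain, and record the accuracy lost relative to the full model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

accuracy_drop = {}
for j in range(X.shape[1]):
    X_tr_c, X_te_c = X_tr.copy(), X_te.copy()
    const = X_tr[:, j].mean()
    X_tr_c[:, j] = const             # 'drop' feature j by holding it constant
    X_te_c[:, j] = const
    acc = LogisticRegression().fit(X_tr_c, y_tr).score(X_te_c, y_te)
    accuracy_drop[j] = baseline - acc  # accuracy lost without feature j
```

Features whose removal costs the most accuracy are the ones the model relies on most.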
9,187
Deep learning : How do I know which variables are important?
What you described is not a "deep network": you only have $10$ inputs and $5$ units in the hidden layer. When people say deep learning, it usually means hundreds of thousands of hidden units. For a shallow network, this gives an example of defining variable importance. For a really deep network, people do not talk ab...
9,188
Deep learning : How do I know which variables are important?
The most that I've found about this is elaborately listed on this site; more specifically, you can look at this. If you talk only about linear models, then you have to normalize the weights to make them interpretable, but even this can be misleading (more on this at the link mentioned). Some people tried making complex func...
9,189
Deep learning : How do I know which variables are important?
Given that you have:

- A classification task
- A trained model
- Normalised features (between 0 and 1)

Has anyone tried:

- Zeroing out the biases
- Pass each time as features a one-hot vector where all features are zero except one
- Examine the output

In that case, I think the output would be a number designating the "imp...
9,190
Deep learning : How do I know which variables are important?
You can also compute permutation importance of the input variables: https://scikit-learn.org/stable/modules/permutation_importance.html It is model-agnostic and is applicable to measure importance of input variables for “black-box” models like neural networks.
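A minimal usage sketch for the linked scikit-learn routine (my own toy regression, with a small MLP standing in for the network): each input column is shuffled in turn, and the drop in the model's score is recorded as that variable's importance.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=6, n_informative=3,
                       noise=5.0, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X, y)

# shuffle one column at a time and measure the drop in the model's score
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
importances = result.importances_mean      # one value per input variable
```

Because it only needs predictions, the same call works for any fitted estimator.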
9,191
Deep learning : How do I know which variables are important?
What you describe is IMHO a simple and effective way to determine what inputs your model is most sensitive to. However, 'sensitive' is not necessarily the same as 'important'. For example if your model is very prone to overfitting issues then such a metric could easily lead you in the wrong direction: Those 'highly sen...
9,192
The proof of shrinking coefficients using ridge regression through "spectral decomposition"
The question appears to ask for a demonstration that Ridge Regression shrinks coefficient estimates towards zero, using a spectral decomposition. The spectral decomposition can be understood as an easy consequence of the Singular Value Decomposition (SVD). Therefore, this post starts with SVD. It explains it in simp...
The proof of shrinking coefficients using ridge regression through "spectral decomposition"
The question appears to ask for a demonstration that Ridge Regression shrinks coefficient estimates towards zero, using a spectral decomposition. The spectral decomposition can be understood as an ea
The proof of shrinking coefficients using ridge regression through "spectral decomposition" The question appears to ask for a demonstration that Ridge Regression shrinks coefficient estimates towards zero, using a spectral decomposition. The spectral decomposition can be understood as an easy consequence of the Singul...
The proof of shrinking coefficients using ridge regression through "spectral decomposition" The question appears to ask for a demonstration that Ridge Regression shrinks coefficient estimates towards zero, using a spectral decomposition. The spectral decomposition can be understood as an ea
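The shrinkage the answer describes can be checked numerically. Writing $X = UDV^\top$, the ridge solution is $\hat\beta_\lambda = V\,\mathrm{diag}\big(d_i/(d_i^2+\lambda)\big)U^\top y$, so relative to OLS each SVD direction is multiplied by the factor $d_i^2/(d_i^2+\lambda) < 1$. A quick sketch with random data and an arbitrary $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, lam = 80, 4, 10.0
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

U, d, Vt = np.linalg.svd(X, full_matrices=False)

beta_ols = Vt.T @ ((U.T @ y) / d)                     # OLS via SVD
beta_ridge = Vt.T @ ((d / (d**2 + lam)) * (U.T @ y))  # ridge via SVD

# each SVD direction i is shrunk by d_i^2 / (d_i^2 + lam) < 1
shrink = d**2 / (d**2 + lam)

# cross-check against the direct normal-equations solution
beta_direct = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Since the shrinkage factors are all below one, the ridge coefficient vector is strictly shorter than the OLS one.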
9,193
Advantages of doing "double lasso" or performing lasso twice?
Yes, the procedure you are asking about (or thinking of) is called the relaxed lasso. The general idea is that in the process of performing the LASSO for the first time you are probably including "noise variables"; performing the LASSO on a second set of variables (after the first LASSO) gives less competition between varia...
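A sketch of the two-stage idea with scikit-learn (my own toy example; the penalty values are arbitrary): the first lasso selects a subset of variables, and the second lasso is run only on the survivors with a lighter penalty.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]          # only 3 variables carry signal
y = X @ beta + rng.normal(size=n)

# stage 1: lasso on all variables; some noise variables may sneak in
first = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(first.coef_)

# stage 2: lasso again, restricted to the selected variables, with a
# lighter penalty so the surviving coefficients are shrunk less
second = Lasso(alpha=0.01).fit(X[:, selected], y)
```

In practice both penalties would be chosen by cross-validation rather than fixed as here.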
9,194
Advantages of doing "double lasso" or performing lasso twice?
The idea is to separate the two effects of lasso:

- Variable selection (i.e., many, even most, $\beta$s are zero)
- Coefficient shrinkage (i.e., even non-zero $\beta$s are smaller, in absolute value, than in unpenalised regression). This is often a good thing even without selection because you avoid over-fitting.

If you h...
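One simple way to keep the selection while discarding the shrinkage is to follow the lasso with an unpenalised refit on the selected variables (my own toy example, with an arbitrary penalty): the lasso does the selection, and the OLS refit undoes the shrinkage on the survivors.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(8)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]          # only 3 variables carry signal
y = X @ beta + rng.normal(size=n)

lasso = Lasso(alpha=0.2).fit(X, y)   # selection + shrinkage
keep = np.flatnonzero(lasso.coef_)

# unpenalised refit on the selected variables: keeps the selection,
# drops the shrinkage
ols = LinearRegression().fit(X[:, keep], y)
```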
9,195
How to compute the confidence interval of the ratio of two normal means
Fieller's method does what you want -- compute a confidence interval for the quotient of two means, both assumed to be sampled from Gaussian distributions. The original citation is: Fieller EC: The biological standardization of Insulin. Suppl to J R Statist Soc 1940, 7:1-64. The Wikipedia article does a good job of s...
9,196
How to compute the confidence interval of the ratio of two normal means
R has the package mratios with the function t.test.ratio. Gemechis Dilba Djira, Mario Hasler, Daniel Gerhard and Frank Schaarschmidt (2011). mratios: Inferences for ratios of coefficients in the general linear model. R package version 1.3.15. http://CRAN.R-project.org/package=mratios See also http://www.r-proje...
9,197
How to compute the confidence interval of the ratio of two normal means
Also if you want to compute Fieller's confidence interval not using mratios (typically because you don't want a simple lm fit but for example a glmer or glmer.nb fit), you can use the following FiellerRatioCI function, with model the output of the model, aname the name of the numerator parameter, bname the name of the ...
9,198
How to compute the confidence interval of the ratio of two normal means
You can calculate it through:

- Fieller's method
- The Taylor method, also called the Delta method: it's easier than Fieller's but will fail if the denominator approaches zero.
- The Hwang–bootstrap method, a bootstrap technique that does not result in unbounded confidence limits.

Here you can find a thorough description and ...
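A sketch of Fieller's interval in Python for two independent samples (my own implementation of the textbook formula, with a rough pooled degrees-of-freedom choice; not taken from any of the packages mentioned above):

```python
import numpy as np
from scipy import stats

def fieller_ci(a, va, b, vb, df, level=0.95):
    """Fieller CI for the ratio a/b of two independent normal estimates,
    where va and vb are the variances of the estimates a and b."""
    t = stats.t.ppf(0.5 + level / 2, df)
    g = t**2 * vb / b**2
    if g >= 1:                        # denominator not separated from zero:
        return (-np.inf, np.inf)      # the interval is unbounded
    theta = a / b
    half = (t / abs(b)) * np.sqrt(va + theta**2 * vb - g * va)
    return ((theta - half) / (1 - g), (theta + half) / (1 - g))

rng = np.random.default_rng(7)
x = rng.normal(10.0, 2.0, size=50)    # numerator sample, true mean 10
w = rng.normal(5.0, 1.0, size=50)     # denominator sample, true mean 5

lo, hi = fieller_ci(x.mean(), x.var(ddof=1) / x.size,
                    w.mean(), w.var(ddof=1) / w.size,
                    df=x.size + w.size - 2)
```

When `g` approaches 1 the denominator is statistically indistinguishable from zero and the interval blows up, which is exactly the situation where the Delta method fails silently.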
9,199
Statistical podcasts
BBC's More or Less is often concerned with numeracy and statistical literacy issues. But it's not specifically about statistics. Their About page has some background. More or Less is devoted to the powerful, sometimes beautiful, often abused but ever ubiquitous world of numbers. The programme was an idea born of...
9,200
Statistical podcasts
There is EconTalk; it is mostly about economics, but it very often delves into issues of research, science, and statistics.