Dataset columns (name: type, value/length range):

idx: int64, values 1 to 56k
question: string, length 15 to 155
answer: string, length 2 to 29.2k
question_cut: string, length 15 to 100
answer_cut: string, length 2 to 200
conversation: string, length 47 to 29.3k
conversation_cut: string, length 47 to 301
9,501
Why doesn't backpropagation work when you initialize the weights to the same value?
To add to Thierry's answer, you can think of the error as a function of the weight vector, i.e. as a function from $\mathbb{R}^n \rightarrow \mathbb{R}$ which you would like to minimize. The backpropagation algorithm works by looking at a local neighborhood of a point and seeing which direction will lead to a smaller error. This will th...
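The symmetry problem behind this answer can be checked numerically. A minimal sketch (the architecture, data, and learning rate are assumptions for illustration, not from the answer): initialize every hidden weight of a tiny two-layer network to the same value; every hidden unit then computes the same output and receives the same gradient, so the columns of the weight matrix stay identical after every update.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # 8 samples, 3 features (hypothetical data)
y = rng.normal(size=(8, 1))

W1 = np.full((3, 4), 0.5)            # all hidden weights start equal
W2 = np.full((4, 1), 0.5)

for _ in range(10):
    h = np.tanh(X @ W1)              # hidden activations: all columns identical
    out = h @ W2
    grad_out = 2 * (out - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T         # same gradient flows to every hidden unit
    grad_W1 = X.T @ (grad_h * (1 - h**2))
    W1 -= 0.05 * grad_W1
    W2 -= 0.05 * grad_W2

# The weights have moved, but every column of W1 still equals every other:
# symmetry is never broken.
print(np.allclose(W1, W1[:, [0]]))   # True
```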
9,502
How to interpret main effects when the interaction effect is not significant?
A little niggle: 'Now many textbook examples tell me that if there is a significant effect of the interaction, the main effects cannot be interpreted.' I hope that's not true. They should say that if there is an interaction term, say between X and Z called XZ, then the interpretation of the individual coefficients fo...
9,503
How to interpret main effects when the interaction effect is not significant?
If you want the unconditional main effect, then yes, you do want to run a new model without the interaction term, because the interaction term does not allow you to see your unconditional main effects correctly. The main effects calculated with the interaction present are different from the main effects as one typicall...
9,504
How to interpret main effects when the interaction effect is not significant?
If the main effects are significant but not the interaction, you simply interpret the main effects, as you suggested. You do not need to run another model without the interaction (it is generally not the best advice to exclude parameters based on significance; there are many answers here discussing that). Just take the ...
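The point about leaving the interaction in can be illustrated with a small simulation (all numbers here are hypothetical, not from the answers): when the true interaction is zero, the main-effect estimates are essentially the same with or without the interaction term in an ordinary least-squares fit.

```python
import numpy as np

# Simulate y = 2x + 3z + noise, with no true interaction.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 2 * x + 3 * z + rng.normal(size=n)

X_full = np.column_stack([np.ones(n), x, z, x * z])   # with interaction term
X_main = np.column_stack([np.ones(n), x, z])          # main effects only

b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
b_main, *_ = np.linalg.lstsq(X_main, y, rcond=None)

print(b_full[1:3])   # main effects ~ [2, 3]; interaction b_full[3] near 0
print(b_main[1:3])   # essentially the same ~ [2, 3]
```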
9,505
How can I calculate margin of error in a NPS (Net Promoter Score) result?
Suppose the population, from which we assume you are sampling randomly, contains proportions $p_1$ of promoters, $p_0$ of passives, and $p_{-1}$ of detractors, with $p_1+p_0+p_{-1}=1$. To model the NPS, imagine filling a large hat with a huge number of tickets (one for each member of your population) labeled $+1$ for p...
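The hat-drawing model described above leads directly to a margin of error: the NPS is the sample mean of tickets valued $+1$, $0$, and $-1$, so the usual standard error of a mean applies. A minimal sketch with hypothetical counts:

```python
import numpy as np

# Hypothetical sample: 300 promoters, 500 passives, 200 detractors.
promoters, passives, detractors = 300, 500, 200
n = promoters + passives + detractors

# One ticket per respondent: +1, 0, or -1.
tickets = np.concatenate([np.ones(promoters),
                          np.zeros(passives),
                          -np.ones(detractors)])

nps = tickets.mean()                    # (300 - 200) / 1000 = 0.10, i.e. NPS of 10
se = tickets.std(ddof=1) / np.sqrt(n)   # standard error of the mean
print(nps, 1.96 * se)                   # point estimate and ~95% margin of error
```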
9,506
How can I calculate margin of error in a NPS (Net Promoter Score) result?
You could also use the variance estimator for continuous variables. Actually, I'd prefer it over the variance estimator for the random discrete variable, since there is a well-known correction for calculating the sample variance: https://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation As others noted, W...
9,507
How can I calculate margin of error in a NPS (Net Promoter Score) result?
You can potentially use the bootstrap to simplify your calculations. In R the code would be:

library(bootstrap)
NPS <- function(x) {
  if (sum(!x %% 1 == 0) > 0) { stop("Non-integers found in the scores.") }
  if (sum(x > 10 | x < 0) > 0) { stop("Scores not on scale of 0 to 10.") }
  sum(ifelse(x < 7, -1, ifelse(x > 8, 1, 0))) / length(x) * 100
}
NPSconfInt = ...
9,508
What is Bayes Error in machine learning?
Bayes error is the lowest possible prediction error that can be achieved and is the same as irreducible error. Even if one knew exactly what process generates the data, errors would still be made if the process is random. This is also what is meant by "$y$ is inherently stochastic". For example, when flipping a fa...
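A small sketch extending the coin-flip illustration (the setup is an assumed example, not from the answer): suppose labels come from a known random process with $P(y=1 \mid x) = 0.8$ for every $x$. The Bayes classifier always predicts the more likely class, yet still errs 20% of the time, and no model can do better; that 0.2 is the Bayes error.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random(100_000) < 0.8        # labels from the known random process

bayes_pred = np.ones_like(y)         # optimal rule: always predict class 1
error = np.mean(bayes_pred != y)     # irreducible error of the optimal rule
print(error)                         # ~0.2
```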
9,509
What is Bayes Error in machine learning?
The essence of statistics is a lack of information. For example, to determine the outcome of a coin flip, we would have to know the earth's gravity at the test point, the coin's curvature, the wind speed, the hand posture, ... If all of that were determined, the outcome of the experiment would be known with certainty. But we can't determine it all. Or in the determin...
9,510
What is Bayes Error in machine learning?
From https://www.cs.helsinki.fi/u/jkivinen/opetus/iml/2013/Bayes.pdf: for a classification task, the Bayes error is defined as $\min_f \mathrm{Cost}(f)$, and the Bayes classifier as $\mathrm{argmin}_f \mathrm{Cost}(f)$. So total error = Bayes error + how much your model is worse than the Bayes error, which is not the same as ($\not\equiv$) bias + variance + Bayes error, which may depe...
9,511
What is the mathematical difference between random- and fixed-effects?
The simplest model with random effects is the one-way ANOVA model with random effects, given by observations $y_{ij}$ with distributional assumptions: $$(y_{ij} \mid \mu_i) \sim_{\text{iid}} {\cal N}(\mu_i, \sigma^2_w), \quad j=1,\ldots,J, \qquad \mu_i \sim_{\text{iid}} {\cal N}(\mu, \sigma^2_b), \quad i=1,\ldots,I.$...
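The one-way random-effects model above is easy to simulate. A sketch with assumed parameter values (the specific $\mu$, $\sigma_b$, $\sigma_w$, $I$, $J$ are illustrative choices): group means $\mu_i$ are drawn from ${\cal N}(\mu, \sigma^2_b)$, observations $y_{ij}$ from ${\cal N}(\mu_i, \sigma^2_w)$, and the total variance decomposes as $\sigma^2_b + \sigma^2_w$.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma_b, sigma_w = 10.0, 2.0, 1.0
I, J = 2000, 20                      # I groups, J observations per group

mu_i = rng.normal(mu, sigma_b, size=I)                 # random group effects
y = rng.normal(mu_i[:, None], sigma_w, size=(I, J))    # observations y_ij

print(y.var())               # ~ sigma_b^2 + sigma_w^2 = 5.0
print(y.mean(axis=1).var())  # ~ sigma_b^2 + sigma_w^2 / J = 4.05
```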
9,512
What is the mathematical difference between random- and fixed-effects?
Basically, what I think is the most distinct difference when you model a factor as random is that the effects are assumed to be drawn from a common normal distribution. For example, if you have some sort of model regarding grades and you want to account for your student data coming from different schools, and you model s...
9,513
What is the mathematical difference between random- and fixed-effects?
In econ land, such effects are individual-specific intercepts (or constants) that are unobserved, but can be estimated using panel data (repeated observation on the same units over time). The fixed effects estimation method allows for correlation between the unit-specific intercepts and the independent explanatory vari...
9,514
What is the mathematical difference between random- and fixed-effects?
In a standard software package (e.g. R's lmer), the basic difference is:

- fixed effects are estimated by maximum likelihood (least squares for a linear model)
- random effects are estimated by empirical Bayes (least squares with some shrinkage for a linear model, where the shrinkage parameter is chosen by maximum likelih...
9,515
What is the mathematical difference between random- and fixed-effects?
From reading the answers above, I guess the major difference is whether we assume a Gaussian for the individual means. Fixed effects don't say much about that assumption, because what we are interested in is whether sample A differs from sample B (e.g., are males taller than females?). While if that's not our aim, estimatio...
9,516
How can I test the fairness of a d20?
Here's an example with R code. The output is preceded by #'s. A fair die:

rolls <- sample(1:20, 200, replace = T)
table(rolls)
# rolls
#  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
#  7  8 11  9 12 14  9 14 11  7 11 10 13  8  8  5 13  9 10 11
chisq.test(table(rolls), p = rep(0.05, 20))
# Chi-sq...
9,517
How can I test the fairness of a d20?
Do you want to do it by hand, or in Excel? If you want to do it in R, you can do it this way:

Step 1: roll your die (let's say) 100 times.
Step 2: count how many times you got each of your numbers.
Step 3: put them in R like this (write the number of times you got each die roll, instead of the numbers I wrote):

x <- as...
9,518
How can I test the fairness of a d20?
Nobody has suggested a Bayesian approach yet? I know the question has been answered already, but what the heck. Below is for only a 3-sided die, but I'm guessing it's obvious how to fix it for $n=37$ sides. First, in line with what @Glen_b said, a Bayesian is not actually interested in whether or not the die is exactly f...
9,519
How can I test the fairness of a d20?
If you are interested in just checking the number of times each number appears, then a chi-squared test would be suitable. Suppose you roll a die N times. You would expect each value to come up N/20 times. All a chi-squared test does is compare what you observed with what you expected. If this difference is too large, then t...
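The recipe above can be carried out by hand. A Python sketch using hypothetical counts from $N = 200$ rolls of a d20 (the counts are illustrative, not real data):

```python
import numpy as np

# Observed count for each of the 20 faces (hypothetical sample of 200 rolls).
observed = np.array([7, 8, 11, 9, 12, 14, 9, 14, 11, 7,
                     11, 10, 13, 8, 8, 5, 13, 9, 10, 11])
N = observed.sum()                   # 200 rolls in total
expected = np.full(20, N / 20)       # a fair d20 gives N/20 = 10 per face

# Chi-squared statistic: sum of (observed - expected)^2 / expected.
chi2 = ((observed - expected) ** 2 / expected).sum()
print(chi2)                          # 11.6, with 20 - 1 = 19 degrees of freedom
# Compare with the chi-squared(19) critical value (about 30.14 at the 5%
# level): a statistic this small gives no evidence against fairness.
```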
9,520
How can I test the fairness of a d20?
A chi-squared goodness of fit test aims to find all possible kinds of deviations from strict uniformity. This is reasonable with a d4 or a d6, but with a d20, you're probably more interested in checking that the probability that you roll under (or possibly exceed) each outcome is close to what it should be. What I am g...
9,521
How can I test the fairness of a d20?
Perhaps one should not focus as much on one set of rolls. Try rolling a 6-sided die 10 times and repeating the process 8 times.

> xy <- rmultinom(10, n = N, prob = rep(1, K)/K)
> xy
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    3    1    0    0    1    1    2    1
[2,]    0    0    1    2    1    1    0    1
[3,] ...
9,522
Unsupervised, supervised and semi-supervised learning
Generally, the problems of machine learning may be considered variations on function estimation for classification, prediction, or modeling. In supervised learning one is furnished with inputs ($x_1$, $x_2$, ...) and outputs ($y_1$, $y_2$, ...) and is challenged with finding a function that approximates this behavior i...
9,523
Unsupervised, supervised and semi-supervised learning
Unsupervised Learning: unsupervised learning is when you have no labeled data available for training. Examples of this are often clustering methods. Supervised Learning: in this case your training data consists of labeled data. The problem you solve here is often predicting the labels for unlabeled data points. Sem...
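A toy sketch contrasting the first two settings on the same data (everything here is hypothetical): the supervised rule uses the $(x, y)$ pairs to learn a decision threshold, while the unsupervised one recovers the same two groups from $x$ alone with a tiny 1-D 2-means loop.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])
y = np.array([0] * 50 + [1] * 50)          # labels: which group each point is from

# Supervised: use the labels to place a threshold between the class means.
threshold = (x[y == 0].mean() + x[y == 1].mean()) / 2

# Unsupervised: 2-means on x alone; the labels y are never consulted.
c = np.array([x.min(), x.max()])           # initial centroids
for _ in range(10):
    assign = np.abs(x[:, None] - c).argmin(axis=1)   # nearest-centroid step
    c = np.array([x[assign == k].mean() for k in (0, 1)])

print(threshold)        # ~2.5, midway between the groups
print(np.sort(c))       # ~[0, 5], recovered without any labels
```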
9,524
Unsupervised, supervised and semi-supervised learning
I don't think that supervised/unsupervised is the best way to think about it. For basic data mining, it's better to think about what you are trying to do. There are four main tasks:

- prediction: if you are predicting a real number, it is called regression; if you are predicting a whole number or class, it is called cla...
9,525
How to find/estimate probability density function from density function in R
?density points out that it uses approx to do linear interpolation already; ?approx points out that approxfun generates a suitable function:

x <- log(rgamma(150, 5))
df <- approxfun(density(x))
plot(density(x))
xnew <- c(0.45, 1.84, 2.3)
points(xnew, df(xnew), col = 2)

By use of integrate starting from an appropriate distan...
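For comparison, the same idea — a kernel density estimate on a grid, then linear interpolation at new points — can be sketched in Python. The bandwidth rule below is an assumed Silverman-style choice, not R's default:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.log(rng.gamma(5, size=150))               # analogous to log(rgamma(150, 5))

# Gaussian KDE evaluated on a grid.
h = 1.06 * x.std() * len(x) ** (-1 / 5)          # assumed bandwidth rule
grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, 512)
dens = (np.exp(-0.5 * ((grid[:, None] - x) / h) ** 2).sum(axis=1)
        / (len(x) * h * np.sqrt(2 * np.pi)))

# Linear interpolation at new points, like approxfun(density(x)).
xnew = np.array([0.45, 1.84, 2.3])
print(np.interp(xnew, grid, dens))               # density estimates at xnew
```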
9,526
How to find/estimate probability density function from density function in R
spatstat.core::CDF() can be used to create a cumulative distribution function from a given output of density().

set.seed(123)
x <- rnorm(10000000)
x_density <- density(x, n = 10000)
x_cdf <- spatstat.core::CDF(x_density)
sds <- c(-2, -1, 0, 1, 2)
names(sds) <- sds
# check cdf at different values
setNames(
  x_cdf(s...
9,527
Proof that F-statistic follows F-distribution
Let us show the result for the general case of which your formula for the test statistic is a special case. In general, we need to verify that the statistic can, according to the characterization of the $F$ distribution, be written as the ratio of independent $\chi^2$ r.v.s divided by their degrees of freedom. Let $...
9,528
Proof that F-statistic follows F-distribution
@ChristophHanck has provided a very comprehensive answer; here I will add a sketch of a proof for the special case the OP mentioned. Hopefully it's also easier for beginners to follow. A random variable $Y\sim F_{d_1,d_2}$ if $$Y=\frac{X_1/d_1}{X_2/d_2},$$ where $X_1\sim\chi^2_{d_1}$ and $X_2\sim\chi^2_{d_2}$ are independent....
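A quick simulation check of this characterization (an illustration only, not part of the proof): draw independent $\chi^2$ variables, form the ratio, and compare its mean to the known $F(d_1,d_2)$ mean $d_2/(d_2-2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, n = 3, 10, 200_000

x1 = rng.chisquare(d1, n)            # X1 ~ chi^2_{d1}
x2 = rng.chisquare(d2, n)            # X2 ~ chi^2_{d2}, independent of X1
y = (x1 / d1) / (x2 / d2)            # Y = (X1/d1) / (X2/d2) ~ F(d1, d2)

# The F(d1, d2) mean is d2/(d2 - 2) for d2 > 2; here 10/8 = 1.25.
print(y.mean())
```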
9,529
How well does bootstrapping approximate the sampling distribution of an estimator?
In information theory the typical way to quantify how "close" one distribution is to another is to use the KL divergence. Let's try to illustrate it with a highly skewed long-tailed dataset - delays of plane arrivals at the Houston airport (from the hflights package). Let $\hat \theta$ be the mean estimator. First, we find the samp...
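The comparison described above can be sketched generically in Python (using a skewed exponential population in place of the hflights data, which is an assumption for illustration): estimate the sampling distribution of the mean by repeated draws from the population, the bootstrap distribution from one observed sample, and the KL divergence from shared-bin histograms.

```python
import numpy as np

rng = np.random.default_rng(1)
n, B = 200, 5000

# "True" sampling distribution of the mean: repeated samples from the population.
true_means = rng.exponential(1.0, size=(B, n)).mean(axis=1)

# Bootstrap distribution: resample with replacement from one observed sample.
sample = rng.exponential(1.0, n)
boot_means = sample[rng.integers(0, n, size=(B, n))].mean(axis=1)

# KL(bootstrap || true) estimated on a shared histogram; eps guards against log 0.
bins = np.linspace(min(true_means.min(), boot_means.min()),
                   max(true_means.max(), boot_means.max()), 40)
p, _ = np.histogram(boot_means, bins, density=True)
q, _ = np.histogram(true_means, bins, density=True)
eps = 1e-12
kl = np.sum(np.diff(bins) * p * np.log((p + eps) / (q + eps)))
print(kl)
```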
9,530
How well does bootstrapping approximate the sampling distribution of an estimator?
Bootstrap is based on the convergence of the empirical cdf to the true cdf, that is, $$\hat{F}_n(x) = \frac{1}{n}\sum_{i=1}^n\mathbb{I}_{X_i\le x}\qquad X_i\stackrel{\text{iid}}{\sim}F(x)$$ converges (as $n$ goes to infinity) to $F(x)$ for every $x$. Hence convergence of the bootstrap distribution of $\hat{\theta}(X_1,...
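A small numerical illustration of this convergence (a sketch that evaluates the distance only at the sample points): the Kolmogorov distance between $\hat F_n$ and the standard normal CDF shrinks roughly like $1/\sqrt{n}$.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def sup_dist(n):
    """Max |F_n(x) - F(x)| over the sample points, for n N(0,1) draws."""
    x = np.sort(rng.normal(size=n))
    ecdf = np.arange(1, n + 1) / n
    true = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in x])
    return np.max(np.abs(ecdf - true))

d_small, d_big = sup_dist(100), sup_dist(10_000)
print(d_small, d_big)    # the second is much smaller
```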
9,531
Is hyperparameter tuning on sample of dataset a bad idea?
In addition to Jim's (+1) answer: For some classifiers, the hyper-parameter values are dependent on the number of training examples, for instance for a linear SVM, the primal optimization problem is $\mathrm{min} \frac12\|w\|^2 + C\sum_{i=1}^\ell \xi_i$ subject to $y_i(x_i\cdot w + b) \geq 1 - \xi_i, \quad \mathrm{and...
9,532
Is hyperparameter tuning on sample of dataset a bad idea?
Is hyperparameter tuning on sample of dataset a bad idea? A: Yes, because you risk overfitting (the hyperparameters) on that specific test set resulting from your chosen train-test split. Do I limit my classification accuracy? A: Yes, but common machine learning wisdom is: with your optimal hyperparameters, say $\la...
9,533
Is hyperparameter tuning on sample of dataset a bad idea?
This paper is about taking other/smaller datasets for the tuning of bigger datasets: https://papers.nips.cc/paper/5086-multi-task-bayesian-optimization.pdf In contrast to what Jim said, I think it is not a bad idea.
9,534
Is hyperparameter tuning on sample of dataset a bad idea?
I'll answer for artificial neural networks (ANNs). The hyperparameters of an ANN may define either its learning process (e.g., learning rate or mini-batch size) or its architecture (e.g., number of hidden units or layers). Tuning architectural hyperparameters on a subset of your training set is probably not a good idea ...
9,535
Is hyperparameter tuning on sample of dataset a bad idea?
You can take a look at https://link.springer.com/chapter/10.1007/978-3-319-53480-0_27 in which we've investigated the effects of random sampling on SVM hyper-parameter tuning using 100 real-world datasets...
9,536
Is hyperparameter tuning on sample of dataset a bad idea?
You can use hyperparameter optimization algorithms which support multifidelity evaluations, i.e., evaluations on sub-sets of your data, in order to get a rough but useful estimate about optimal hyperparameter values for the entire dataset. Such approaches typically allow one to reduce the total computational cost needed...
9,537
Introduction to machine learning for mathematicians
For what you describe, I highly recommend "Foundations of Machine Learning" by Mohri et al. It is an undergraduate text, but it is for really good undergraduates. It is readable and it is the only place I have found what I would call a mathematical definition of machine learning (PAC and weak PAC). It is worth readi...
9,538
Introduction to machine learning for mathematicians
I would recommend Elements of Statistical Learning (free PDF file). It has sufficient maths and a good introduction to all the relevant techniques - together with some insights on why the techniques work (and when they don't). Also Introduction to Statistical Learning (which is more practical - how to do it in R). It h...
9,539
Introduction to machine learning for mathematicians
You will probably like Learning With Kernels by Schölkopf and Smola. Most of Schölkopf's work is mathematically rigorous. That said, you are probably better off reading research papers instead of textbooks. Research papers contain full derivations and proofs of convergence, bounds on performance, etc. which are very o...
9,540
Introduction to machine learning for mathematicians
I would suggest Understanding Machine Learning: From Theory to Algorithms by Shai Shalev-Shwartz and Shai Ben-David. I admit that I read only small portions of it, but I immediately noticed the rigor with which the authors approached every problem and discussion.
9,541
Moving-average model error terms
MA Model Estimation: Let us assume a series with 100 time points, and say this is characterized by an MA(1) model with no intercept. Then the model is given by $$y_t=\varepsilon_t-\theta\varepsilon_{t-1},\quad t=1,2,\cdots,100\quad (1)$$ The error term here is not observed. So to obtain this, Box et al. Time Series Analys...
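The recursion behind this can be sketched in Python (an illustration of the conditional approach with $\varepsilon_0 = 0$ on simulated data, not Box et al.'s full procedure): invert the model as $\varepsilon_t = y_t + \theta\,\varepsilon_{t-1}$ and minimise the conditional sum of squares over a grid of candidate $\theta$ values.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n = 0.6, 300

# Simulate the MA(1): y_t = e_t - theta * e_{t-1}
e = rng.normal(size=n + 1)
y = e[1:] - theta * e[:-1]

def residuals(theta_hat, y):
    """Recover the errors recursively for a candidate theta, starting
    from e_0 = 0: e_t = y_t + theta * e_{t-1}."""
    eps = np.zeros(len(y) + 1)
    for t in range(len(y)):
        eps[t + 1] = y[t] + theta_hat * eps[t]
    return eps[1:]

# The conditional sum of squares is minimised near the true theta.
thetas = np.linspace(-0.9, 0.9, 181)
css = [np.sum(residuals(th, y) ** 2) for th in thetas]
best = thetas[int(np.argmin(css))]
print(best)                           # close to 0.6
```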
9,542
Moving-average model error terms
A Gaussian MA(q) model is defined (not only by Box and Jenkins!) as $$ Y_t = -\sum_{i=1}^q \vartheta_i e_{t-i} + \sigma e_t,\quad e_t\stackrel{\text{iid}}{\sim} \mathcal{N}(0,1) $$ so the MA(q) model is a "pure" error model, the degree $q$ defining how far the correlation goes back.
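The "how far the correlation goes back" point can be checked by simulation (a sketch with assumed coefficients $\vartheta_1=0.5$, $\vartheta_2=0.3$): the autocorrelation of an MA(2) process is nonzero at lags 1 and 2 and essentially zero beyond.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
th1, th2 = 0.5, 0.3                  # vartheta_1, vartheta_2 (assumed values)

# Simulate the MA(2): Y_t = -th1 * e_{t-1} - th2 * e_{t-2} + e_t
e = rng.normal(size=n + 2)
y = e[2:] - th1 * e[1:-1] - th2 * e[:-2]

def acf(x, k):
    x = x - x.mean()
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

print([round(acf(y, k), 3) for k in range(1, 5)])
# lags 1 and 2 are clearly nonzero; lags 3+ are ~0: the correlation goes back q steps
```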
9,543
Moving-average model error terms
See my post here for an explanation of how to understand the disturbance terms in an MA series. You need different estimation techniques to estimate them. This is because you cannot first get the residuals of a linear regression and then include the lagged residual values as explanatory variables, because the MA process...
9,544
Moving-average model error terms
You say "the observation $Y$ is first regressed against its previous values $Y_{t−1},...,Y_{t−n}$ and then one or more $Y−\hat{Y}$ values are used as the error terms for the MA model." What I say is that $Y$ is regressed against two predictor series $e_{t-1}$ and $e_{t−2}$ yielding an error process $e_t$ which will be ...
9,545
Moving-average model error terms
With the Hannan–Rissanen (1982) algorithm for fitting the parameters of an ARIMA model you actually always do an AR regression as the first step, even for a pure MA model: An AR(m) model (with $m > \max(p, q)$) is fitted to the data. Compute error terms for all $t$: $\epsilon_t = y_t - \hat{y}_t$. Regress $y_t$ on $y^{(d)}_{t-1},..,y^{(...
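A minimal Python sketch of these steps for a pure MA(1) (simulated data; the long-AR order m = 15 is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n, theta = 2000, 0.6
e = rng.normal(size=n + 1)
y = e[1:] + theta * e[:-1]                 # MA(1) with coefficient +0.6

# Step 1: fit a long AR(m), m > max(p, q), by least squares.
m = 15
X = np.column_stack([y[m - k: n - k] for k in range(1, m + 1)])
phi, *_ = np.linalg.lstsq(X, y[m:], rcond=None)

# Step 2: the AR residuals stand in for the unobserved error terms.
eps = y[m:] - X @ phi

# Step 3: regress y_t on the lagged residuals to estimate the MA coefficient
# (a pure MA model, so no lagged y terms are needed in this regression).
theta_hat, *_ = np.linalg.lstsq(eps[:-1][:, None], y[m + 1:], rcond=None)
print(theta_hat[0])                        # close to the true 0.6
```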
9,546
Interpretation of ridge regularization in regression
Good questions! Yes, this is exactly correct. You can see the ridge penalty as one possible way to deal with the multicollinearity problem that arises when many predictors are highly correlated. Introducing the ridge penalty effectively lowers these correlations. I think this is partly tradition, partly the fact that ridge regres...
9,547
Interpretation of ridge regularization in regression
A further comment on question 4. Actually, ridge regression does pretty effectively deal with the small eigenvalues of $X^{T}X$ while mostly leaving the large eigenvalues alone. To see this, express the ridge regression estimator in terms of the singular value decomposition of $X$, $$X=\sum_{i=1}^{n} \sigma_{i}u_{...
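The filter-factor view is easy to verify numerically: the ridge coefficient vector equals $\sum_i v_i\,\frac{\sigma_i}{\sigma_i^2+\lambda}\,u_i^{T}y$, so each OLS component is shrunk by $\sigma_i^2/(\sigma_i^2+\lambda)$, which is near 1 for large singular values and near 0 for small ones. A quick check on random data (a sketch, with arbitrary $n$, $p$, $\lambda$):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, lam = 50, 5, 2.0
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Direct ridge solution: (X'X + lam*I)^{-1} X'y
beta_direct = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# SVD form: the OLS component along each v_i is shrunk by the filter factor
# sigma_i^2 / (sigma_i^2 + lam): ~1 for large sigma_i, ~0 for small sigma_i.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
beta_svd = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

print(np.allclose(beta_direct, beta_svd))   # True
filter_factors = s**2 / (s**2 + lam)
print(filter_factors)                        # decreasing with sigma_i
```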
9,548
Interpretation of ridge regularization in regression
Questions 1, 2 and 3 are linked. I like to think that yes, introducing a Ridge penalty in a linear regression model can be interpreted as a shrinkage on the eigen-values of $X$. In order to make this interpretation, one has first to make the assumption that $X$ is centered. This interpretation is based on the following...
9,549
Introductory reading on Copulas
A concise introduction is T. Schmidt 2008 - Copulas and dependent measurement. Also noteworthy is Embrechts 2009 - Copulas - A personal view. For Schmidt I could not provide a better summary than the section titles. It provides basic definitions, intuition and examples. Discussion of sampling is bare-bones, and a brief...
9,550
Introductory reading on Copulas
Chris Genest has another introductory paper "Everything You Always Wanted to Know about Copula Modeling but Were Afraid to Ask".
9,551
Introductory reading on Copulas
A good layperson introduction to copulas and their use in quantitative finance is http://archive.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all The concept of correlation of probabilities is illustrated by two elementary school students Alice and Britney. It also discusses how prices of credit default swaps ...
9,552
Introductory reading on Copulas
I recommend this paper as a must-read: Li, David X. "On default correlation: A copula function approach." The Journal of Fixed Income 9.4 (2000): 43-54. Here's the PDF. It explains what a copula is and how it can be used in financial applications. It's a nice, easy read. This should be followed by an article by Felix S...
9,553
Introductory reading on Copulas
Another good introduction is An introduction to copulas (Nelsen 2006).
9,554
What are some illustrative applications of empirical likelihood?
I can think of no better place than Owen's book to learn about empirical likelihood. One practical way to think about $L = L(p_1, \ldots, p_n)$ is as the likelihood for a multinomial distribution on the observed data points $x_1, \ldots, x_n$. The likelihood is thus a function of the probability vector $(p_1, \ldots, p...
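This multinomial view can be made concrete with a small sketch of the profile empirical likelihood for a mean (a generic illustration on simulated data, not from Owen's book): with only the constraint $\sum_i p_i = 1$ the maximiser is $p_i = 1/n$ (the ECDF), and adding the mean constraint gives $p_i = 1/\{n(1+\lambda(x_i-\mu))\}$ with $\lambda$ solving the score equation.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(1.0, 200)
n = len(x)

def neg2_el_logratio(mu):
    """-2 log empirical likelihood ratio for the mean: maximise prod p_i
    subject to sum p_i = 1 and sum p_i (x_i - mu) = 0, which gives
    p_i = 1 / (n (1 + lam (x_i - mu))) with lam solving the score equation."""
    z = x - mu
    g = lambda lam: np.sum(z / (1 + lam * z))          # decreasing in lam
    lo, hi = -1 / z.max() + 1e-8, -1 / z.min() - 1e-8  # keep all 1 + lam*z > 0
    for _ in range(100):                               # bisection for the root
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2 * np.sum(np.log1p(lam * z))

# Unconstrained (beyond sum p_i = 1) the maximiser is p_i = 1/n, the ECDF, so the
# statistic is ~0 at mu = sample mean and grows as mu moves away (~ chi^2_1).
print(neg2_el_logratio(x.mean()), neg2_el_logratio(x.mean() + 0.3))
```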
9,555
What are some illustrative applications of empirical likelihood?
In econometrics, many applied papers start with the assumption that $$ E[g(X,\theta)] = 0 $$ where $X$ is a vector of data, $g$ is a known system of $q$ equations, and $\theta \in \Theta \subseteq \mathbb{R}^p$ is an unknown parameter, $q \geq p$. The function $g$ comes from an economic model. The goal is to estimate $...
9,556
What are some illustrative applications of empirical likelihood?
In survival analysis, the Kaplan-Meier curve is the most famous non-parametric estimator of the survival function $S(t) = Pr(T > t)$, where $T$ denotes the time-to-event random variable. Basically, $\hat{S}$ is a generalisation of the empirical distribution function which allows censoring. It can be derived heuristical...
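The product-limit construction behind the Kaplan-Meier curve can be sketched in a few lines of Python (toy event/censoring times, made up for illustration):

```python
# Kaplan-Meier product-limit estimate on toy data.
# events[i] = 1 if the event was observed at times[i], 0 if censored.
times = [2, 3, 3, 5, 7, 8]
events = [1, 0, 1, 1, 0, 1]

s = 1.0
surv = {}  # estimated S(t) just after each event time
for t in sorted(set(times)):
    # events (deaths) occurring exactly at t
    d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
    # subjects still at risk just before t
    n_risk = sum(1 for ti in times if ti >= t)
    if d > 0:
        s *= 1 - d / n_risk  # survive this event time with prob (n - d) / n
        surv[t] = s
```

Censored observations never trigger a factor themselves, but they shrink the risk set for later event times, which is exactly how censoring enters the estimator.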
9,557
How to derive the probabilistic interpretation of the AUC?
First thing, let's try to define the area under the ROC curve formally. Some assumptions and definitions: We have a probabilistic classifier that outputs a "score" s(x), where x are the features, and s is a generic increasing monotonic function of the estimated probability p(class = 1|x). $f_{k}(s)$, with $k = \{0, 1\...
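Under those definitions, the identity can be checked numerically: the pairwise probability $P(s^{+} > s^{-})$ (counting ties as 1/2) equals the trapezoidal area under the empirical ROC curve. A small Python sketch with made-up scores:

```python
# Toy scores for actual negatives (y = 0) and actual positives (y = 1)
neg = [0.1, 0.35, 0.4, 0.8]
pos = [0.3, 0.6, 0.7, 0.9]

# (1) Probabilistic definition: P(s_pos > s_neg), counting ties as 1/2
pairs = [(p, n) for p in pos for n in neg]
auc_prob = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in pairs) / len(pairs)

# (2) Trapezoidal area under the empirical ROC curve:
# sweep the threshold over the observed scores, from high to low
points = [(0.0, 0.0)]
for t in sorted(set(neg + pos), reverse=True):
    fpr = sum(s >= t for s in neg) / len(neg)
    tpr = sum(s >= t for s in pos) / len(pos)
    points.append((fpr, tpr))
auc_trap = sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
```

Both routes give the same number, which is the discrete analogue of the derivation above.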
9,558
How to derive the probabilistic interpretation of the AUC?
@alebu's answer is great. But its notation is nonstandard and uses 0 for the positive class and 1 for the negative class. Below are the results for the standard notation (0 for the negative class and 1 for the positive class): Pdf and cdf of the score for negative class: $f_0(s)$ and $F_0(s)$ Pdf and cdf of the score f...
9,559
How to derive the probabilistic interpretation of the AUC?
The way to calculate AUC-ROC is to plot out the TPR and FPR as the threshold, $\tau$ is changed and calculate the area under that curve. But, why is this area under the curve the same as this probability? Let's assume the following: $A$ is the distribution of scores the model produces for data points that are actually...
9,560
How to derive the probabilistic interpretation of the AUC?
Turns out I wrote a Medium article just for that! Here it is: https://medium.com/@nathanaim/mathematics-behind-roc-auc-interpretation-e4e6f202a015 TL;DR: to go through the end of the demonstration, one needs to use the convolution theorem. If you don't want to change sites, here is the full trick. We want to show tha...
9,561
Collinearity diagnostics problematic only when the interaction term is included
Yes, this is usually the case with non-centered interactions. A quick look at what happens to the correlation of two independent variables and their "interaction" set.seed(12345) a = rnorm(10000,20,2) b = rnorm(10000,10,2) cor(a,b) cor(a,a*b) > cor(a,b) [1] 0.01564907 > cor(a,a*b) [1] 0.4608877 And then when you cent...
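The same experiment can be reproduced outside R. This pure standard-library Python sketch mirrors the simulation above: the raw product a*b is strongly correlated with a because of the non-zero means, while the mean-centered product is not:

```python
import math
import random

random.seed(12345)

def corr(u, v):
    # plain Pearson correlation
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((x - mu) * (y - mv) for x, y in zip(u, v))
    den = math.sqrt(sum((x - mu) ** 2 for x in u)
                    * sum((y - mv) ** 2 for y in v))
    return num / den

n = 10000
a = [random.gauss(20, 2) for _ in range(n)]  # mean 20, far from zero
b = [random.gauss(10, 2) for _ in range(n)]  # mean 10, far from zero

raw_product = [x * y for x, y in zip(a, b)]

ma, mb = sum(a) / n, sum(b) / n
centered_product = [(x - ma) * (y - mb) for x, y in zip(a, b)]

raw = corr(a, raw_product)            # substantial "collinearity"
centered = corr(a, centered_product)  # near zero after centering
```

This is the usual argument for mean-centering before forming interaction terms when collinearity diagnostics flag the product term.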
9,562
Collinearity diagnostics problematic only when the interaction term is included
I've found the following publications on this topic useful: Robinson & Schumacker (2009): Interaction effects: centering, variance inflation factor, and interpretation issues 'The effects of predictor scaling on coefficients of regression equations (centered versus uncentered solutions and higher order interaction effe...
9,563
In R, given an output from optim with a hessian matrix, how to calculate parameter confidence intervals using the hessian matrix?
If you are maximising a likelihood then the covariance matrix of the estimates is (asymptotically) the inverse of the negative of the Hessian. The standard errors are the square roots of the diagonal elements of the covariance matrix (as noted elsewhere on the web by Prof. Thomas Lumley and Spencer Graves). For a 95% ...
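As a minimal illustration (a hypothetical one-parameter example, not tied to any particular optim call): for n observations from N(mu, sigma) with sigma known, the Hessian of the negative log-likelihood at the optimum is n/sigma^2, so inverting it recovers the familiar standard error sigma/sqrt(n):

```python
import math

# Toy data; suppose we minimised the negative log-likelihood of N(mu, 2)
x = [4.1, 5.3, 3.8, 6.0, 5.2, 4.6, 5.5, 4.9]
sigma = 2.0
n = len(x)
mu_hat = sum(x) / n  # the MLE in this simple closed-form case

# Hessian of the NEGATIVE log-likelihood at the optimum (this is what
# optim(..., hessian = TRUE) returns when you minimise the negative
# log-likelihood): d^2/dmu^2 = n / sigma^2. Because the sign flip is
# already built in, we invert it directly to get the covariance.
hessian = n / sigma ** 2
cov = 1.0 / hessian          # 1x1 "matrix" inverse
se = math.sqrt(cov)          # sqrt of the diagonal; here sigma / sqrt(n)

# 95% Wald confidence interval
ci = (mu_hat - 1.96 * se, mu_hat + 1.96 * se)
```

In the multi-parameter case the only change is a genuine matrix inverse of the Hessian, with standard errors read off its diagonal.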
9,564
Pseudo R squared formula for GLMs
There are a large number of pseudo-$R^2$s for GLiMs. The excellent UCLA statistics help site has a comprehensive overview of them here. The one you list is called McFadden's pseudo-$R^2$. Relative to UCLA's typology, it is like $R^2$ in the sense that it indexes the improvement of the fitted model over the null mode...
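Concretely, McFadden's pseudo-R^2 needs only the two maximised log-likelihoods. A sketch with hypothetical values (as if read off a fitted model and its intercept-only counterpart):

```python
# Hypothetical maximised log-likelihoods, as if taken from a fitted GLiM
# and from the corresponding null (intercept-only) model.
ll_null = -120.5
ll_full = -95.2

# McFadden's pseudo-R^2: improvement of the model over the null model
mcfadden = 1 - ll_full / ll_null

# For ungrouped binary data the saturated log-likelihood is 0, so
# deviance = -2 * loglik and the same quantity can be computed from the
# null and residual deviances that glm-style software reports:
dev_null, dev_full = -2 * ll_null, -2 * ll_full
mcfadden_from_dev = 1 - dev_full / dev_null
```

The deviance shortcut only coincides with McFadden's formula when the saturated log-likelihood is zero; otherwise the two ratios differ.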
9,565
Pseudo R squared formula for GLMs
R gives null and residual deviance in the output to glm so that you can make exactly this sort of comparison (see the last two lines below). > x = log(1:10) > y = 1:10 > glm(y ~ x, family = poisson) >Call: glm(formula = y ~ x, family = poisson) Coefficients: (Intercept) x 5.564e-13 1.000e+00 D...
9,566
Pseudo R squared formula for GLMs
The formula you proposed was proposed by Maddala (1983) and Magee (1990) to estimate R squared for logistic models, so I don't think it's applicable to all GLM models (see the book Modern Regression Methods by Thomas P. Ryan, page 266). If you make a fake data set, you will see that it underestimates the ...
9,567
Pseudo R squared formula for GLMs
The R package modEvA calculates D-Squared as 1 - (mod$deviance/mod$null.deviance) as mentioned by David J. Harris set.seed(1) data <- data.frame(y=rpois(n=10, lambda=exp(1 + 0.2 * x)), x=runif(n=10, min=0, max=1.5)) mod <- glm(y~x,data,family = poisson) 1- (mod$deviance/mod$null.deviance) [1] 0.01133757 library(modEv...
9,568
Correlation between OLS estimators for intercept and slope
Let me try it as follows (really not sure if that is useful intuition): Based on my above comment, the correlation will roughly be $$-\frac{E(X)}{\sqrt{E(X^2)}}$$ Thus, if $E(X)>0$ instead of $E(X)=0$, most data will be clustered to the right of zero. Thus, if the slope coefficient gets larger, the correlation formula ...
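That approximation can be checked by simulation. For a fixed design, the exact result is $\mathrm{Corr}(\hat\beta_0, \hat\beta_1) = -\bar{x} / \sqrt{\overline{x^2}}$, which the Monte Carlo estimate below reproduces (pure-Python sketch; the design values and true coefficients are made up):

```python
import math
import random

random.seed(0)
n, reps = 40, 4000
x = [random.uniform(1.0, 4.0) for _ in range(n)]  # fixed design, E(X) > 0
xbar = sum(x) / n
sxx = sum((xi - xbar) ** 2 for xi in x)

b0s, b1s = [], []
for _ in range(reps):
    # simulate y = 1 + 2x + noise and refit OLS each time
    y = [1.0 + 2.0 * xi + random.gauss(0, 1) for xi in x]
    ybar = sum(y) / n
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    b0s.append(b0)
    b1s.append(b1)

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)
    sa = math.sqrt(sum((u - ma) ** 2 for u in a) / len(a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b) / len(b))
    return cov / (sa * sb)

theory = -xbar / math.sqrt(sum(xi ** 2 for xi in x) / n)
observed = corr(b0s, b1s)
```

Because all x values sit well to the right of zero here, the theoretical correlation is strongly negative, matching the intuition in the answer: tilting the line up forces the intercept down.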
9,569
Correlation between OLS estimators for intercept and slope
You might like to follow Dougherty's Introduction to Econometrics, perhaps considering for now that $x$ is a non-stochastic variable, and defining the mean square deviation of $x$ to be $\DeclareMathOperator{\MSD}{MSD}\MSD(x) = \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2$. Note that the MSD is measured in the square of ...
9,570
Interpretation of betas when there are multiple categorical variables
You are right about the interpretation of the betas when there is a single categorical variable with $k$ levels. If there were multiple categorical variables (and there were no interaction term), the intercept ($\hat\beta_0$) is the mean of the group that constitutes the reference level for both (all) categorical vari...
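A small numerical check (hypothetical data built to be exactly additive, so the least-squares fit is exact): with two dummy-coded binary factors and no interaction, the intercept equals the reference-cell mean and each remaining coefficient is that factor's offset:

```python
def solve(A, b):
    # tiny Gaussian elimination with partial pivoting, enough for 3x3
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Two binary factors A and B with purely additive cell means (no
# interaction): reference cell (A=0, B=0) has mean 10; A adds 2, B adds 3.
cells = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 13.0), (1, 1, 15.0)]
X = [[1.0, a, b] for a, b, _ in cells for _ in range(2)]  # two obs per cell
y = [m for _, _, m in cells for _ in range(2)]

# Ordinary least squares via the normal equations X'X beta = X'y
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
beta = solve(XtX, Xty)  # [intercept, offset for A=1, offset for B=1]
```

With real (non-additive) data the intercept is no longer exactly the reference-cell sample mean, which is the point the answer makes about multiple categorical predictors.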
9,571
Interpretation of betas when there are multiple categorical variables
Actually as you correctly pointed out, in the case of a single categorical variable (with potentially more than 2 levels), $\hat{\beta}_0$ is indeed the mean of the reference and the other $\hat\beta$ are the difference between the mean of that level of the category and the mean of the reference. If we extend a bit yo...
9,572
What is the difference between logistic and logit regression?
The logit is a link function / a transformation of a parameter. It is the logarithm of the odds. If we call the parameter $\pi$, it is defined as follows: $$ {\rm logit}(\pi) = \log\bigg(\frac{\pi}{1-\pi}\bigg) $$ The logistic function is the inverse of the logit. If we have a value, $x$, the logistic is: $$ {\rm lo...
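In code, the two functions and their inverse relationship look like this (a minimal Python sketch):

```python
import math

def logit(p):
    # log-odds: maps a probability in (0, 1) to the whole real line
    return math.log(p / (1 - p))

def logistic(x):
    # inverse of the logit: maps any real number back into (0, 1)
    return 1 / (1 + math.exp(-x))
```

So "logit regression" and "logistic regression" describe the same model from opposite directions: the link applied to the probability, or its inverse applied to the linear predictor.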
9,573
What is the difference between logistic and logit regression?
This answer applies to scikit-learn in Python. Both Logit from statsmodels and LogisticRegression from scikit-learn can be used to fit logistic regression models. However, there are some differences between the two methods. Logit from statsmodels provides more detailed statistical output, including p-values, confidenc...
9,574
Is a vague prior the same as a non-informative prior?
Gelman et al. (2003) say: there has long been a desire for prior distributions that can be guaranteed to play a minimal role in the posterior distribution. Such distributions are sometimes called 'reference prior distributions' and the prior density is described as vague, flat, or noninformative.[emphasis from origin...
9,575
Is a vague prior the same as a non-informative prior?
Definitely not, although they are frequently used interchangeably. A vague prior (relatively uninformed, not really favoring some values over others) on a parameter $\theta$ can actually induce a very informative prior on some other transformation $f(\theta)$. This is at least part of the motivation for Jeffreys' prior...
9,576
Is a vague prior the same as a non-informative prior?
Lambert et al (2005) raise the question "How Vague is Vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS". They write: "We do not advocate the use of the term non-informative prior distribution as we consider all priors to contribute some information". I tend to agree ...
9,577
Is a vague prior the same as a non-informative prior?
I suspect "vague prior" is used to mean a prior that is known to encode some small, but non-zero amount of knowledge regarding the true value of a parameter, whereas a "non-informative prior" would be used to mean complete ignorance regarding the value of that parameter. It would perhaps be used to show that the analy...
9,578
Is a vague prior the same as a non-informative prior?
Non-informative priors have different forms, including vague priors and improper priors. So a vague prior is one kind of non-informative prior.
9,579
What are correct values for precision and recall in edge cases?
Given a confusion matrix:

                 predicted
                (+)    (-)
               ---------
          (+) | TP | FN |
   actual      ---------
          (-) | FP | TN |
               ---------

we know that:

Precision = TP / (TP + FP)
Recall    = TP / (TP + FN)

Let's consider the cases where the denominator is zero: TP+FN=0 : means...
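One way to encode this in code, treating each zero-denominator case as "nothing went wrong there" and returning 1.0 (this is one common convention, not a universal rule; the function name is made up for illustration):

```python
def precision_recall(tp, fp, fn):
    # Edge-case conventions:
    # TP + FP = 0 -> no positive predictions: precision = 1.0
    #                (no spurious results were produced)
    # TP + FN = 0 -> no actual positives: recall = 1.0
    #                (nothing that existed was missed)
    precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 1.0
    return precision, recall
```

Some libraries instead return 0.0 (often with a warning) in these cases, so it is worth checking which convention your tooling uses before comparing numbers.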
9,580
What are correct values for precision and recall in edge cases?
Answer is Yes. The undefined edge cases occur when true positives (TP) are 0, since this is in the denominator of both P & R. In this case: Recall = 1 when FN=0, since 100% of the TP were discovered; Precision = 1 when FP=0, since there were no spurious results. This is a reformulation of @mbq's comment.
9,581
What are correct values for precision and recall in edge cases?
I am familiar with different terminology. What you call precision I would call positive predictive value (PPV), and what you call recall I would call sensitivity (Sens): http://en.wikipedia.org/wiki/Receiver_operating_characteristic In the case of sensitivity (recall), if the denominator is zero (as Amro points out), ther...
9,582
What are correct values for precision and recall in edge cases?
That would depend on what you mean by "approach 0". If false positives and false negatives both approach zero at a faster rate than true positives, then yes to both questions. But otherwise, not necessarily.
What are correct values for precision and recall in edge cases?
That would depend on what you mean by "approach 0". If false positives and false negatives both approach zero at a faster rate than true positives, then yes to both questions. But otherwise, not neces
What are correct values for precision and recall in edge cases? That would depend on what you mean by "approach 0". If false positives and false negatives both approach zero at a faster rate than true positives, then yes to both questions. But otherwise, not necessarily.
What are correct values for precision and recall in edge cases? That would depend on what you mean by "approach 0". If false positives and false negatives both approach zero at a faster rate than true positives, then yes to both questions. But otherwise, not neces
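The edge-case convention described in the answers above can be sketched in a few lines of Python; defining a score as 1.0 when its denominator is zero follows the first answer's reasoning and is a convention, not a universal standard:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall with the 0/0 edge cases defined as 1.0."""
    # tp + fp == 0: no positive predictions were made, so none were spurious
    precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
    # tp + fn == 0: no actual positives exist, so none were missed
    recall = tp / (tp + fn) if (tp + fn) > 0 else 1.0
    return precision, recall

print(precision_recall(0, 0, 0))  # (1.0, 1.0): nothing to find, nothing claimed
print(precision_recall(0, 3, 2))  # (0.0, 0.0): only mistakes were made
```

Frameworks differ here in practice (some return 0.0, some raise a warning), which is exactly why the edge cases are worth pinning down explicitly.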
9,583
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
Short answer is NO. The format in which the image is encoded has to do with its quality. Neural networks are essentially mathematical models that perform lots and lots of operations (matrix multiplications, element-wise additions and mapping functions). A neural network sees a Tensor as its input (i.e. a multi-dimen...
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
Short answer is NO. The format in which the image is encoded has to do with its quality. Neural networks are essentially mathematical models that perform lots and lots of operations (matrix multipli
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained? Short answer is NO. The format in which the image is encoded has to do with its quality. Neural networks are essentially mathematical models that perform lots and lots of operations (matrix multiplications, element-wise addit...
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained? Short answer is NO. The format in which the image is encoded has to do with its quality. Neural networks are essentially mathematical models that perform lots and lots of operations (matrix multipli
9,584
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
While Djib2011's answer is correct, I understand your question as more focused on how image quality/properties affect neural network learning in general. There is little research on this topic (afaik), but there might be more in the future. I only found this article on it. The problem at the momen...
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
While Djib2011's answer is correct, I understand your question as more focused on how image quality/properties affect neural network learning in general. There is little research on this topic
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained? While Djib2011's answer is correct, I understand your question as more focused on how image quality/properties affect neural network learning in general. There is little research on this topic (afaik), but there might be...
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained? While Djib2011's answer is correct, I understand your question as more focused on how image quality/properties affect neural network learning in general. There is little research on this topic
9,585
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
This is a riff on the first answer from Djib2011. The short answer has to be no. Longer - Firstly, photos are always encoded as a tensor as follows. An image is a number of pixels. If the photo is considered to have m rows and n columns, each pixel is specified by its row and column location, that is by the pair (m...
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
This is a riff on the first answer from Djib2011. The short answer has to be no. Longer - Firstly photos are always encoded as a tensor as follows. An image is a number of pixels. If the photo is
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained? This is a riff on the first answer from Djib2011. The short answer has to be no. Longer - Firstly photos are always encoded as a tensor as follows. An image is a number of pixels. If the photo is considered to have m rows a...
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained? This is a riff on the first answer from Djib2011. The short answer has to be no. Longer - Firstly photos are always encoded as a tensor as follows. An image is a number of pixels. If the photo is
9,586
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
While changes in camera or image compression after training can be severe, if it stays the same the problem is much smaller. Of course, with noisier images performance is worse, but I have never heard that standard JPEG compression makes a big difference. But it will depend on the application. If you change things afte...
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
While changes in camera or image compression after training can be severe, if it stays the same the problem is much smaller. Of course, with noisier images performance is worse, but I have never heard that
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained? While changes in camera or image compression after training can be severe, if it stays the same the problem is much smaller. Of course, with noisier images performance is worse, but I have never heard that standard JPEG compression ...
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained? While changes in camera or image compression after training can be severe, if it stays the same the problem is much smaller. Of course, with noisier images performance is worse, but I have never heard that
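A small experiment makes the answers' point concrete (a sketch assuming Pillow and NumPy are installed; neither library is named in the answers): whatever the file format, the decoded input is a tensor of the same shape, and the format only determines how faithfully pixel values survive encoding.

```python
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)

def roundtrip(arr, fmt):
    """Encode the array in the given file format and decode it back."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format=fmt)
    buf.seek(0)
    return np.asarray(Image.open(buf))

png = roundtrip(original, "PNG")
jpg = roundtrip(original, "JPEG")

print(png.shape == jpg.shape == original.shape)  # True: same tensor either way
print(np.array_equal(png, original))             # True: PNG is lossless
print(np.array_equal(jpg, original))             # False: JPEG quantization loses detail
```

So the network never sees "a PNG" or "a JPEG", only the decoded array; what can matter is the quality loss a lossy codec introduces, especially if it differs between training and deployment.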
9,587
In what order should you do linear regression diagnostics?
The process is iterative, but there is a natural order: You have to worry first about conditions that cause outright numerical errors. Multicollinearity is one of those, because it can produce unstable systems of equations potentially resulting in outright incorrect answers (to 16 decimal places...) Any problem here ...
In what order should you do linear regression diagnostics?
The process is iterative, but there is a natural order: You have to worry first about conditions that cause outright numerical errors. Multicollinearity is one of those, because it can produce unstab
In what order should you do linear regression diagnostics? The process is iterative, but there is a natural order: You have to worry first about conditions that cause outright numerical errors. Multicollinearity is one of those, because it can produce unstable systems of equations potentially resulting in outright inc...
In what order should you do linear regression diagnostics? The process is iterative, but there is a natural order: You have to worry first about conditions that cause outright numerical errors. Multicollinearity is one of those, because it can produce unstab
9,588
In what order should you do linear regression diagnostics?
I think it depends on the situation. If you don't expect any particular problems you can probably check these in any order. If you expect outliers and might have a reason to remove them after detecting them then check for outliers first. The other issues with the model could change after observations are removed. A...
In what order should you do linear regression diagnostics?
I think it depends on the situation. If you don't expect any particular problems you can probably check these in any order. If you expect outliers and might have a reason to remove them after detect
In what order should you do linear regression diagnostics? I think it depends on the situation. If you don't expect any particular problems you can probably check these in any order. If you expect outliers and might have a reason to remove them after detecting them then check for outliers first. The other issues wit...
In what order should you do linear regression diagnostics? I think it depends on the situation. If you don't expect any particular problems you can probably check these in any order. If you expect outliers and might have a reason to remove them after detect
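Both answers put numeric problems such as multicollinearity first; one quick screen for them, before any residual-based diagnostic, is the condition number of the design matrix. A sketch with simulated data (the variables and threshold below are made up for the example, not taken from either answer):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=1e-4, size=n)  # nearly a copy of x1
X = np.column_stack([np.ones(n), x1, x2])

# A huge condition number flags an unstable system of equations,
# worth addressing before interpreting any later diagnostic.
print(np.linalg.cond(X))
```

With `x2` almost identical to `x1` the condition number is orders of magnitude larger than for independent predictors, which is the kind of instability that should be resolved before moving on to outliers or residual plots.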
9,589
How to determine quantiles (isolines?) of a multivariate normal distribution
The contour line is an ellipsoid. The reason is that you have to look at the argument of the exponential in the pdf of the multivariate normal distribution: the isolines would be lines with the same argument. Then you get $$ ({\bf x}-\mu)^T\Sigma^{-1}({\bf x}-\mu) = c $$ where $\Sigma$ is the covariance matrix. Tha...
How to determine quantiles (isolines?) of a multivariate normal distribution
The contour line is an ellipsoid. The reason is that you have to look at the argument of the exponential in the pdf of the multivariate normal distribution: the isolines would be lines with the sa
How to determine quantiles (isolines?) of a multivariate normal distribution The contour line is an ellipsoid. The reason is that you have to look at the argument of the exponential in the pdf of the multivariate normal distribution: the isolines would be lines with the same argument. Then you get $$ ({\bf x}-\mu)^...
How to determine quantiles (isolines?) of a multivariate normal distribution The contour line is an ellipsoid. The reason is that you have to look at the argument of the exponential in the pdf of the multivariate normal distribution: the isolines would be lines with the sa
9,590
How to determine quantiles (isolines?) of a multivariate normal distribution
You asked about multivariate normal, but started your question by asking about "quantile of a multivariate distribution" in general. From the wording of your question and the example provided it seems that you are interested in highest density regions. They are defined by Hyndman (1996) as follows: Let $f(z)$ be the de...
How to determine quantiles (isolines?) of a multivariate normal distribution
You asked about multivariate normal, but started your question by asking about "quantile of a multivariate distribution" in general. From the wording of your question and the example provided it seems t
How to determine quantiles (isolines?) of a multivariate normal distribution You asked about multivariate normal, but started your question by asking about "quantile of a multivariate distribution" in general. From the wording of your question and the example provided it seems that you are interested in highest density r...
How to determine quantiles (isolines?) of a multivariate normal distribution You asked about multivariate normal, but started your question by asking about "quantile of a multivariate distribution" in general. From the wording of your question and the example provided it seems t
9,591
How to determine quantiles (isolines?) of a multivariate normal distribution
The correct answer should be $-2\ln(\alpha)$. There was a mistake in the calculation above. The corrected version: $$ \int_0^{\sqrt{c}} z e^{-z^2/2}\,dz = \int_{-c/2}^{0} e^{s}\,ds = 1 - e^{-c/2} $$
How to determine quantiles (isolines?) of a multivariate normal distribution
The correct answer should be $-2\ln(\alpha)$. There was a mistake in the calculation above. The corrected version: $$ \int_0^{\sqrt{c}} z e^{-z^2/2}\,dz = \int_{-c/2}^{0} e^{s}\,ds = 1 - e^{-c/2} $$
How to determine quantiles (isolines?) of a multivariate normal distribution The correct answer should be $-2\ln(\alpha)$. There was a mistake in the calculation above. The corrected version: $$ \int_0^{\sqrt{c}} z e^{-z^2/2}\,dz = \int_{-c/2}^{0} e^{s}\,ds = 1 - e^{-c/2} $$
How to determine quantiles (isolines?) of a multivariate normal distribution The correct answer should be $-2\ln(\alpha)$. There was a mistake in the calculation above. The corrected version: $$ \int_0^{\sqrt{c}} z e^{-z^2/2}\,dz = \int_{-c/2}^{0} e^{s}\,ds = 1 - e^{-c/2} $$
9,592
How to determine quantiles (isolines?) of a multivariate normal distribution
You could draw ellipses corresponding to Mahalanobis distances. library(chemometrics) data(glass) data(glass.grp) x=glass[,c(2,7)] require(robustbase) x.mcd=covMcd(x) drawMahal(x,center=x.mcd$center,covariance=x.mcd$cov,quantile=0.90) Or with circles around 95%, 75%, and 50% of data drawMahal(x,center=x.mcd$center...
How to determine quantiles (isolines?) of a multivariate normal distribution
You could draw ellipses corresponding to Mahalanobis distances. library(chemometrics) data(glass) data(glass.grp) x=glass[,c(2,7)] require(robustbase) x.mcd=covMcd(x) drawMahal(x,center=x.mcd$cent
How to determine quantiles (isolines?) of a multivariate normal distribution You could draw ellipses corresponding to Mahalanobis distances. library(chemometrics) data(glass) data(glass.grp) x=glass[,c(2,7)] require(robustbase) x.mcd=covMcd(x) drawMahal(x,center=x.mcd$center,covariance=x.mcd$cov,quantile=0.90) Or ...
How to determine quantiles (isolines?) of a multivariate normal distribution You could draw ellipses corresponding to Mahalanobis distances. library(chemometrics) data(glass) data(glass.grp) x=glass[,c(2,7)] require(robustbase) x.mcd=covMcd(x) drawMahal(x,center=x.mcd$cent
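The pieces above fit together numerically: the squared Mahalanobis distance of a $p$-variate normal is $\chi^2_p$ distributed, so the contour level enclosing 95% of the probability in two dimensions is $c = \chi^2_{2,\,0.95} = -2\ln(0.05)$. A quick Monte Carlo check (a sketch assuming SciPy is available; the mean and covariance are made up for the example):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
x = rng.multivariate_normal(mu, Sigma, size=100_000)

# squared Mahalanobis distance of each sample from the mean
d = x - mu
m2 = np.einsum("ij,jk,ik->i", d, np.linalg.inv(Sigma), d)

c = chi2.ppf(0.95, df=2)  # contour level; equals -2*log(0.05) when p = 2
print(np.mean(m2 <= c))   # fraction inside the ellipse, close to 0.95
```

The ellipse $(\mathbf{x}-\mu)^T\Sigma^{-1}(\mathbf{x}-\mu) = c$ with this $c$ is then the 95% isoline; other quantiles follow by changing the probability passed to the chi-square quantile function.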
9,593
How do I study the "correlation" between a continuous variable and a categorical variable?
For a moment, let's ignore the continuous/discrete issue. Basically correlation measures the strength of the linear relationship between variables, and you seem to be asking for an alternative way to measure the strength of the relationship. You might be interested in looking at some ideas from information theory. S...
How do I study the "correlation" between a continuous variable and a categorical variable?
For a moment, let's ignore the continuous/discrete issue. Basically correlation measures the strength of the linear relationship between variables, and you seem to be asking for an alternative way to
How do I study the "correlation" between a continuous variable and a categorical variable? For a moment, let's ignore the continuous/discrete issue. Basically correlation measures the strength of the linear relationship between variables, and you seem to be asking for an alternative way to measure the strength of the ...
How do I study the "correlation" between a continuous variable and a categorical variable? For a moment, let's ignore the continuous/discrete issue. Basically correlation measures the strength of the linear relationship between variables, and you seem to be asking for an alternative way to
9,594
How do I study the "correlation" between a continuous variable and a categorical variable?
If the categorical variable is ordinal and you bin the continuous variable into a few frequency intervals you can use Gamma. Also available for paired data put into ordinal form are Kendall's tau, Stuart's tau and Somers' D. These are all available in SAS using Proc Freq. I don't know how they are computed using R fun...
How do I study the "correlation" between a continuous variable and a categorical variable?
If the categorical variable is ordinal and you bin the continuous variable into a few frequency intervals you can use Gamma. Also available for paired data put into ordinal form are Kendall's tau, Stu
How do I study the "correlation" between a continuous variable and a categorical variable? If the categorical variable is ordinal and you bin the continuous variable into a few frequency intervals you can use Gamma. Also available for paired data put into ordinal form are Kendall's tau, Stuart's tau and Somers' D. Thes...
How do I study the "correlation" between a continuous variable and a categorical variable? If the categorical variable is ordinal and you bin the continuous variable into a few frequency intervals you can use Gamma. Also available for paired data put into ordinal form are Kendall's tau, Stu
9,595
How do I study the "correlation" between a continuous variable and a categorical variable?
A categorical variable is effectively just a set of indicator variables. It is a basic idea of measurement theory that such a variable is invariant to relabelling of the categories, so it does not make sense to use the numerical labelling of the categories in any measure of the relationship between another variable (e....
How do I study the "correlation" between a continuous variable and a categorical variable?
A categorical variable is effectively just a set of indicator variables. It is a basic idea of measurement theory that such a variable is invariant to relabelling of the categories, so it does not mak
How do I study the "correlation" between a continuous variable and a categorical variable? A categorical variable is effectively just a set of indicator variables. It is a basic idea of measurement theory that such a variable is invariant to relabelling of the categories, so it does not make sense to use the numerical ...
How do I study the "correlation" between a continuous variable and a categorical variable? A categorical variable is effectively just a set of indicator variables. It is a basic idea of measurement theory that such a variable is invariant to relabelling of the categories, so it does not mak
9,596
How do I study the "correlation" between a continuous variable and a categorical variable?
If $X$ is a continuous random variable and $Y$ is a categorical r.v., the observed correlation between $X$ and $Y$ can be measured by the point-biserial correlation coefficient, if $Y$ is dichotomous; the point-polyserial correlation coefficient, if $Y$ is polychotomous with ordinal categories. It should be noted, th...
How do I study the "correlation" between a continuous variable and a categorical variable?
If $X$ is a continuous random variable and $Y$ is a categorical r.v., the observed correlation between $X$ and $Y$ can be measured by the point-biserial correlation coefficient, if $Y$ is dichotomous
How do I study the "correlation" between a continuous variable and a categorical variable? If $X$ is a continuous random variable and $Y$ is a categorical r.v., the observed correlation between $X$ and $Y$ can be measured by the point-biserial correlation coefficient, if $Y$ is dichotomous; the point-polyserial correl...
How do I study the "correlation" between a continuous variable and a categorical variable? If $X$ is a continuous random variable and $Y$ is a categorical r.v., the observed correlation between $X$ and $Y$ can be measured by the point-biserial correlation coefficient, if $Y$ is dichotomous
9,597
How do I study the "correlation" between a continuous variable and a categorical variable?
The R package mpmi can calculate mutual information for the mixed variable case, namely continuous and discrete. Although other statistical options like the (point) biserial correlation coefficient would be useful here, it would be beneficial and highly recommended to calculate mutual information since it...
How do I study the "correlation" between a continuous variable and a categorical variable?
The R package mpmi can calculate mutual information for the mixed variable case, namely continuous and discrete. Although other statistical options like the (point) biserial correlati
How do I study the "correlation" between a continuous variable and a categorical variable? The R package mpmi can calculate mutual information for the mixed variable case, namely continuous and discrete. Although other statistical options like the (point) biserial correlation coefficient would be useful h...
How do I study the "correlation" between a continuous variable and a categorical variable? The R package mpmi can calculate mutual information for the mixed variable case, namely continuous and discrete. Although other statistical options like the (point) biserial correlati
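For the dichotomous case raised in the answers above, the point-biserial coefficient is simply the Pearson correlation between the 0/1 indicator and the continuous variable. A sketch on simulated data (the group shift of 0.8 is made up for the example, and SciPy is assumed to be available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=500)     # dichotomous variable
x = rng.normal(size=500) + 0.8 * y   # continuous, shifted upward in group 1

# point-biserial correlation and its two-sided p-value
r, p = stats.pointbiserialr(y, x)
print(r)  # a moderate positive correlation for this setup
```

For a polychotomous nominal variable this breaks down, which is where the ANOVA-style or mutual-information approaches from the other answers come in.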
9,598
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
One thing that may help move the debate forward is to acknowledge what makes people visually distinguish between background and foreground, taking lessons from cartography and applying them more generally to any statistical graphic. People may initially think that color is a good cue as to whether a specific object is in th...
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
One thing that may help move the debate forward is to acknowledge what makes people visually distinguish between background and foreground, taking lessons from cartography and applying them more generally
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis? One thing that may help move the debate forward is to acknowledge what makes people visually distinguish between background and foreground, taking lessons from cartography and applying them more generally to any statistical grap...
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis? One thing that may help move the debate forward is to acknowledge what makes people visually distinguish between background and foreground, taking lessons from cartography and applying them more generally
9,599
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
Professor Wickham wrote in the ggplot2 book: "We can still see the gridlines to aid in the judgement of position (Cleveland, 1993b), but they have little visual impact and we can easily "tune" them out. The grey background gives the plot a similar colour (in a typographical sense) to the remainder of the text, ...
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
Professor Wickham wrote in the ggplot2 book: "We can still see the gridlines to aid in the judgement of position (Cleveland, 1993b), but they have little visual impact and we can easily "tune" th
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis? Professor Wickham wrote in the ggplot2 book: "We can still see the gridlines to aid in the judgement of position (Cleveland, 1993b), but they have little visual impact and we can easily "tune" them out. The grey backgr...
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis? Professor Wickham wrote in the ggplot2 book: "We can still see the gridlines to aid in the judgement of position (Cleveland, 1993b), but they have little visual impact and we can easily "tune" th
9,600
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
While I tend to avoid the default grey background, perhaps one reason Hadley may have gone with the grey is to allow the user to use more light, saturated colors to display data, which may not appear as effective with a white background.
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
While I tend to avoid the default grey background, perhaps one reason Hadley may have gone with the grey is to allow the user to use more light, saturated colors to display data, which may not appear
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis? While I tend to avoid the default grey background, perhaps one reason Hadley may have gone with the grey is to allow the user to use more light, saturated colors to display data, which may not appear as effective with a whi...
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis? While I tend to avoid the default grey background, perhaps one reason Hadley may have gone with the grey is to allow the user to use more light, saturated colors to display data, which may not appear