Can you use a percentage as an independent variable in multiple linear regression?
Percentages can be considered continuous on the interval [0,1]. There is no reason why percentages can't be independent variables in linear regression. In fact, there is no requirement that independent variables need to be continuous. Indicator variables are often used as independent variables in regressions.
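As an illustration (a sketch of mine, not part of the original answer), a percentage enters an ordinary least-squares fit like any other numeric predictor. Here is a minimal example with simulated data, using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pct = rng.uniform(0, 100, n)   # a percentage predictor on [0, 100]
other = rng.normal(size=n)     # another continuous predictor
y = 1.0 + 0.02 * pct + 0.5 * other + rng.normal(scale=0.1, size=n)

# Design matrix with an intercept column; the percentage enters unchanged.
X = np.column_stack([np.ones(n), pct, other])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta recovers roughly [1.0, 0.02, 0.5]
```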
Can you use a percentage as an independent variable in multiple linear regression?
The assumption of normality to which you refer does not apply to any of the predictors (after all, how could a binary predictor be normal?), nor does it apply to the outcome. What it applies to is the residuals from your model, so at this stage, before you have fitted the model, you do not know whether it holds. Similarly, the usual check for homoscedasticity is based on plotting the residuals against the fitted values. The question of continuity is more subtle, but no measured variable, even if theoretically continuous, remains so when measured to finite precision. If I were modelling length of stay, I would be more worried about the skew, and also about whether some observations are censored because those people have not yet returned to work. Have you considered using a time-to-event model (such as the Cox proportional-hazards model)? Another concern, depending on the rules in your jurisdiction: if maternity pay is at a certain level for $j$ months, a lower level for $k$ months, and then stops, you will (I would have thought) get bunching of values at $j$ and $k$.
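The point that these checks concern the residuals, not the predictors, can be sketched as follows (a hypothetical illustration with simulated data, not from the answer): fit the model, then inspect the residuals, e.g. against the fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1 = rng.uniform(size=n)
x2 = rng.integers(0, 2, n).astype(float)  # a binary predictor: nowhere near normal
y = 2.0 + 1.5 * x1 - 0.7 * x2 + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
resid = y - fitted

# It is these residuals, not x1 or x2, that the normality assumption concerns;
# plotting resid against fitted is the usual homoscedasticity check.
# Here the residuals are roughly N(0, 0.3^2) by construction.
print(resid.mean(), resid.std())
```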
Can you use a percentage as an independent variable in multiple linear regression?
Suppose you have the model $$Y = B_1 X_1 + B_2 X_2 + E,$$ where $E \sim N(0,1)$. Let $X_3$ and $X_4$ be the percentages, and let $S_1$ and $S_2$ be the totals of $X_1$ and $X_2$ respectively, so that $X_3 = 100\,X_1/S_1$ and $X_4 = 100\,X_2/S_2$. The model in terms of percentages is then $$Y = B_3 X_3 + B_4 X_4 + E'.$$ The estimates are $\hat{B} = (X'X)^{-1}X'Y$, and since each predictor has been multiplied by a constant, the coefficients are related by $B_3 = B_1 S_1/100$ and $B_4 = B_2 S_2/100$.
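Numerically, multiplying a predictor by a constant $c$ divides its (no-intercept) coefficient by $c$; here $c = 100/S$. A quick check in Python (my sketch, with simulated data, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.uniform(1, 10, n)
y = 3.0 * x + rng.normal(size=n)

S = x.sum()
x_pct = 100 * x / S  # predictor expressed as a percentage of its total

# Fit y ~ x and y ~ x_pct without an intercept, matching the answer's model form.
b_raw = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
b_pct = np.linalg.lstsq(x_pct[:, None], y, rcond=None)[0][0]

# Multiplying the predictor by c = 100/S divides the coefficient by c:
assert np.isclose(b_pct, b_raw * S / 100)
```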
Fitting exponential (regression) model by MLE?
This has been answered on the R help list by Adelchi Azzalini: the important point is that the dispersion parameter (which is what distinguishes an exponential distribution from the more general Gamma distribution) does not affect the parameter estimates in a generalized linear model, only the standard errors of the parameters, confidence intervals, p-values, etc. In R an estimate of the dispersion parameter is automatically reported, but, as Azzalini comments, summary.glm allows the user to specify the dispersion parameter. So, as stated by Azzalini:

The Gamma family is parametrised in glm() by two parameters: mean and dispersion; the "dispersion" regulates the shape. So [one] must fit a GLM with the Gamma family, and then produce a "summary" with dispersion parameter set equal to 1, since this value corresponds to the exponential distribution in the Gamma family.

In practice:

fit <- glm(formula = ..., family = Gamma(link = "log"))
summary(fit, dispersion = 1)

[Azzalini had family=Gamma, i.e. using the default inverse link; I changed it to specify the log link as in your question.]
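The same exponential fit can also be obtained by maximizing the likelihood directly; the sketch below (mine, with simulated data, not part of the mailing-list answer) minimizes the negative exponential log-likelihood under a log link, which gives the same point estimates as the Gamma GLM since dispersion does not affect them.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(size=n)
mu_true = np.exp(0.5 + 1.2 * x)  # log link: mean mu = exp(b0 + b1*x)
y = rng.exponential(mu_true)

def negloglik(b):
    # Exponential density (1/mu) exp(-y/mu) => -loglik = sum(log mu + y/mu)
    mu = np.exp(b[0] + b[1] * x)
    return np.sum(np.log(mu) + y / mu)

fit = minimize(negloglik, x0=np.zeros(2))
# fit.x lands near (0.5, 1.2), the coefficients the Gamma GLM would report
```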
Fitting exponential (regression) model by MLE?
Using a GLM call, as you suggest, is the easiest correct approach; to actually turn the Gamma into an exponential, specify the dispersion to be 1. This will not change the fitted mean or coefficients, but it does affect the standard errors. [I.e. your suggested call of summary(glm(y~x, family=Gamma(link="log"))) should give you what you want, but if you're interested in the standard errors or significance of the fitted coefficients under the exponential assumption, add dispersion=1 to the summary call. If you want to fit Gamma GLMs more generally, there are several useful helper functions in the MASS package, which comes with R but is not loaded by default.] (Another alternative is a parametric survival model, which also offers ways to fit exponential, Weibull and various other distributions.)
Book recommendations for probability
I suggest a couple of books that, I admit, I never had occasion to study; they would have been my references had I specialized in probability: (1) Ash and Doléans-Dade, "Probability and Measure Theory"; (2) Billingsley, "Probability and Measure". I think (2) is more popular.
Book recommendations for probability
Foundations of Modern Probability by Olav Kallenberg meets all your criteria. It is quite concise and mathematically rigorous and, as one reviewer puts it, "without any non-mathematical distractions".
Book recommendations for probability
I think Amir Dembo's notes are pretty stellar. He updates them each time he teaches the course, and they have really good proofs and exercises. He also has notes on stochastic processes. Williams's Probability with Martingales is also good... but only the parts on martingales. Durrett's book is decent. I am of the persuasion that stochastic processes should be done in depth as their own course; for that, Øksendal's "Stochastic Differential Equations" is easier and more insightful, while Karatzas and Shreve's "Brownian Motion and Stochastic Calculus" is tougher but more thorough, and together they make a good combination. I came out the other side still wanting to learn Malliavin calculus (I still haven't gotten around to it) but feeling ready to do so.
Book recommendations for probability
I know these are not books, but I nonetheless think these materials are quite useful. MIT offers various courses for free, some of which also include books: see the overview of free probability and statistics courses at MIT. The best online course on advanced probability theory is Theory of Probability, taught by Scott Sheffield, the most famous probability professor at MIT.
Book recommendations for probability
Weiss, A Course in Probability, covers at least nearly all of these topics. It's also very readable. There's a strange problem with probability and statistics textbooks where the notation and explanation are exceptionally shoddy and non-rigorous; this book usually doesn't suffer from that deficiency, though it occasionally punts on topics that would require familiarity with measure-theoretic concepts. http://www.amazon.com/Course-Probability-Neil-A-Weiss/dp/0201774712
Book recommendations for probability
I wholeheartedly recommend Parzen's Modern Probability Theory and its Applications. This is a classic written by someone who has made enormous contributions to statistics, in e.g. non-parametric density estimation, and is very easy to read. Moreover, it covers almost all of your required topics (I only can't remember if more advanced forms of the CLT are discussed) and presents a lot of motivating examples. The cherry on top is that since this is an old book, you can buy it for 3 dollars on Amazon or even get it for free on the internet.
Book recommendations for probability
Probability Theory: The Logic of Science by E. T. Jaynes. It's a classic for a reason. I can only recommend it, since it takes a more practical approach. It also contains a bunch of exercises. PDFs of older versions should be available (amazon link).
Ground-truth definition
The term "ground truth" was coined in the geological/earth sciences to describe validation of data by going out in the field and checking "on the ground". It has been adopted in other fields to express the notion of data that is "known" to be correct. In my personal experience it is widely used in biometrics and computer vision. The term "ground truth error" is also in wide use, illustrating the fact that what we "know" is not always correct. For an online definition, see Dictionary.com's 21st Century Lexicon, http://dictionary.reference.com/browse/ground truth (retrieved Aug 18, 2015). For a discussion of ground truth in computer vision, see Krig, Scott, Computer Vision Metrics: Survey, Taxonomy, and Analysis (Apress, 2014), Chapter 7, "Ground Truth Data, Content, Metrics and Analysis" -- available in print and eBook formats. There is an interesting blog at thegroundtruthproject.org. NASA has a glossary of terms that includes ground truth -- see http://podaac.jpl.nasa.gov/Glossary.
Ground-truth definition
In most cases it is used to mean the 'real truth', e.g. in scikit in Python, with specific examples such as image recognition (Nunez-Iglesias et al., PLOS One) and character recognition (Luis von Ahn et al., Science). But how close the 'real truth' is to a fixed value can depend on the complexity of the input and on whether "reference data can be less accurate than the recognition system being evaluated" (Lopresti and Nagy); a search for ground-truthing issues could yield further results, e.g. this overview of symbol recognition. (Whereas "assumed" vs. "validated" would refer largely to a particular hypothesis/implementation.)
Ground-truth definition
This is not exactly a definition, but a brief, nuanced description of ground truth in machine learning by James Kobielus at IBM: http://www.ibmbigdatahub.com/blog/ground-truth-agile-machine-learning Within machine learning, I would call ground truth a human-defined truth or an external truth rather than an epistemological truth or actual truth. Ground truth is the foundation of supervised machine learning.
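The role of ground truth as the foundation of supervised learning can be shown with a toy evaluation (hypothetical labels of my own, not from the blog post): model outputs are scored against human-provided labels that are treated as correct.

```python
# Ground-truth labels (assumed correct, typically human-annotated) vs. model output.
truth = ["cat", "dog", "cat", "bird", "dog"]
predicted = ["cat", "dog", "bird", "bird", "cat"]

# Every supervised metric is computed relative to the ground truth.
accuracy = sum(t == p for t, p in zip(truth, predicted)) / len(truth)
print(accuracy)  # 3 of 5 labels match: 0.6
```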
Ground-truth definition
It means "according to the assumed". The assumed value is the "ground", just as ground is taken as the reference voltage in electrical systems. The term was adopted by computer scientists working in machine learning, many of whom came from electrical engineering.
How to interpret this QQ plot?
This QQ plot has the following salient features:

The stairstep pattern, in which only specific, separated heights ("sample quantiles") are attained, shows the data values are discrete. Almost all are whole numbers from $3$ through $21$; a few half-integers appear. Evidently some form of rounding has occurred.

Because the extreme "theoretical quantiles" are at roughly $\pm 3.2$, there must be around $1400$ data values shown, since the extremes for this much Normally distributed data would have Z-scores of about $\pm 3.2$. (This estimate of $1400$ is rough, but it's in the right ballpark.)

There is a large number of values at the minimum of $3$, far more than at any other value. This is characteristic of left censoring, whereby any value less than a threshold ($3$) is replaced by an indicator that it is less than that threshold--and, for plotting purposes, all such values are plotted at the threshold. (For more on what censoring does to probability plots, see the analysis at https://stats.stackexchange.com/a/30749.)

Apart from this "spike" at $3$, the rest of the points come fairly close to following the diagonal reference line. This suggests the remaining data are not too far from Normally distributed. A closer look, though, shows the remaining points are initially slightly lower than the reference line (for values between $5$ and $10$) and then slightly greater (for values between $13$ and $20$) before returning to the line at the end (value $21$). This "curvature" indicates a certain form of non-normality, one consistent with data that are starting to follow an extreme-value distribution. Specifically, consider the following data-generation mechanism:

1. Collect $k\ge 1$ independent, identically distributed Normal variates and retain just the largest of them. Do that $n = 1400$ times.
2. Left-censor the data at a threshold of $3$.
3. Record their values to two or three decimal places.
4. Round the values to the nearest integer--but don't round any value that is exactly a half-integer (that is, ends in $.500$).

If we set $k=50$ or thereabouts and adjust the mean and standard deviation of those underlying Normal variates to be $\mu = -10$ and $\sigma = 7.5$, we can produce random versions of this QQ plot, and most of them are practically indistinguishable from it. (This is an extremely rough estimate; $k$ could be anywhere between $8$ and $200$ or so, and different values of $k$ would have to be matched with different values of $\mu$ and $\sigma$.) Here are the first six such versions I produced:

What you do with this interpretation depends on your understanding of the data and what you want to learn from them. I make no claim that the data actually were created in such a way, only that their distribution is remarkably like this one.

This is R code to reproduce the figure (and generate many more like it if you wish):

k <- 50
mu <- -10
sigma <- 7.5
threshold <- 3
n <- 1400
#
# Round most values to the nearest integer, occasionally to a half-integer.
#
rnd <- function(x, prec=300) {
  y <- round(x * prec) / prec
  ifelse(2*y == floor(2*y), y, round(y))
}
q <- c(0.25, 0.95)  # Used to draw a reference line
par(mfcol=c(2,3))
set.seed(17)
invisible(replicate(6, {
  # Generate data
  z <- apply(matrix(rnorm(n*k), k), 2, max)  # Max-normal distribution
  y <- mu + sigma * z                        # Scale and recenter it
  x <- rnd(pmax(y, threshold))               # Censor and round the values
  # Plot them
  qqnorm(x, cex=0.8)
  m <- median(x)
  s <- diff(quantile(x, q)) / diff(qnorm(q))
  abline(c(m, s))
  #hist(x)    # Histogram of the data
  #qqnorm(y)  # QQ plot of the uncensored, unrounded data
}))
How to interpret this QQ plot?
This QQ plot has the following salient features: The stairstep pattern, in which only specific, separated heights ("sample quantiles") are attained, shows the data values are discrete. Almost all ar
How to interpret this QQ plot? This QQ plot has the following salient features: The stairstep pattern, in which only specific, separated heights ("sample quantiles") are attained, shows the data values are discrete. Almost all are whole numbers from $3$ through $21$. A few half-integers appear. Evidently some form of rounding has occurred. Because the extreme "theoretical quantiles" are at $\pm 3.2$ (roughly), there must be around $1400$ data shown. This is because the extremes for this much Normally distributed data would have Z-scores about $\pm 3.2$. (This estimate of $1400$ is rough, but it's in the right ballpark.) There is a large number of values at the minimum of $3$, far more than any other value. This is characteristic of left censoring, whereby any value less than a threshold ($3$) is replaced by an indicator that it is less than that threshold--and, for plotting purposes, all such values are plotted at the threshold. (For more on what censoring does to probability plots, see the analysis at https://stats.stackexchange.com/a/30749.) Apart from this "spike" at $3$, the rest of the points come fairly close to following the diagonal reference line. This suggests the remaining data are not too far from Normally distributed. A closer look, though, shows the remaining points are initially slightly lower than the reference line (for values between $5$ and $10$) and then slightly greater (for values between $13$ and $20$) before returning to the line at the end (value $21$). This "curvature" indicates a certain form of non-normality. This particular kind of curvature is consistent with data that are starting to follow an extreme-value distribution. Specifically, consider the following data-generation mechanism: Collect $k\ge 1$ independent, identically distributed Normal variates and retain just the largest of them. Do that $n = 1400$ times. Left-censor the data at a threshold of $3$. Record their values to two or three decimal places. 
Round the values to the nearest integer--but don't round any value that is exactly a half-integer (that is, ends in $.500$). If we set $k=50$ or thereabouts and adjust the mean and standard deviation of those underlying Normal variates to be $\mu = -10$ and $\sigma = 7.5$, we can produce random versions of this QQ plot and most of them are practically indistinguishable from it. (This is an extremely rough estimate; $k$ could be anywhere between $8$ and $200$ or so, and different values of $k$ would have to be matched with different values of $\mu$ and $\sigma$.) Here are the first six such versions I produced: What you do with this interpretation depends on your understanding of the data and what you want to learn from them. I make no claim that the data actually were created in such a way, but only that their distribution is remarkably like this one. This is R code to reproduce the figure (and generate many more like it if you wish).

k <- 50
mu <- -10
sigma <- 7.5
threshold <- 3
n <- 1400
#
# Round most values to the nearest integer, occasionally
# to a half-integer.
#
rnd <- function(x, prec=300) {
  y <- round(x * prec) / prec
  ifelse(2*y == floor(2*y), y, round(y))
}
q <- c(0.25, 0.95) # Used to draw a reference line
par(mfcol=c(2,3))
set.seed(17)
invisible(replicate(6, {
  # Generate data
  z <- apply(matrix(rnorm(n*k), k), 2, max) # Max-normal distribution
  y <- mu + sigma * z                       # Scale and recenter it
  x <- rnd(pmax(y, threshold))              # Censor and round the values
  # Plot them
  qqnorm(x, cex=0.8)
  m <- median(x)
  s <- diff(quantile(x, q)) / diff(qnorm(q))
  abline(c(m, s))
  #hist(x)    # Histogram of the data
  #qqnorm(y)  # QQ plot of the uncensored, unrounded data
}))
38,518
How to interpret this QQ plot?
(As Nick Cox also suggests) the distribution is right skew and discrete, but to the right of the spike at 3, is roughly similar to a standard normal truncated below -1 (which is right skew), but with a shorter right tail. I've made some additional comments on the diagram below: Here's a frequency plot (a sample pmf) that would yield a Q-Q plot roughly similar to yours:
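The claim that a standard normal truncated below $-1$ is right skew is easy to check numerically. A minimal sketch (Python here purely for illustration; the names and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)
t = z[z > -1.0]  # standard normal truncated below -1

# Sample skewness: mean of cubed deviations over sd cubed
def skewness(x):
    d = x - x.mean()
    return (d**3).mean() / d.std()**3

print(skewness(t) > 0)  # True: the truncated distribution is right skew
```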
38,519
How to interpret this QQ plot?
Your data are positively skewed, meaning skewed to the right. "Right" or "left" is a matter of the longer, more stretched out, tail in the distribution. The terminology presupposes that you are (imagining) looking at a conventional histogram with a horizontal magnitude scale. But clearly you have integer values between 3 and 21, hence the appearance of an irregular staircase, except that there are values such as 4.5. You have a prominent spike of values at 3: that should not come as a surprise to you, but we can't tell you why. Similarly, if these are counts, then the absence of 0, 1 and 2 may (or may not) be worth comment. It's possible, however, that numeric measures of skewness may be negative as a side-effect of the spike. The values are reminiscent of grades on a test in which most students did poorly, but few were utterly abysmal, and some messy answers provoked compromise marks. Values in the data that are the same must be plotted at the same horizontal level, at various positions along the $x$ axis. The theoretical quantiles for samples of this size from a true Gaussian distribution would all be distinct, so the values on the $x$ axis must be distinct. The spike alone means that you can't call this distribution "normal". If you thought this distribution would be normal, you need to review your thinking.
38,520
Why convolutional neural networks belong to deep learning?
First, mind that deep learning is a buzz term. There is not even consensus on a formal definition in the research community. A discussion of the term does not lead anywhere, really. It's just a word. That being said, convolutional nets are deep because they rely on multiple layers of feature extraction, as you said. They extract features from the input to predict an outcome. What you refer to is a "generative" approach, i.e. the features are used to create the observation (a picture, not a class label). That is what made deep learning popular, but it is in no way limited to that.
38,521
Why convolutional neural networks belong to deep learning?
Deep learning is an approach where you have a lot of relatively simple layers. You increase learning capability by increasing the number of layers, as opposed to increasing the complexity of the layers. You could, for instance, come up with very fancy output functions, maybe nonlinear functions of the inputs or complicated connections. Instead you stick with simple things like ReLU, linear combinations and softmax, but stack a lot of layers one on top of the other. That's why CNNs fit perfectly into this very generic and rather vague definition of deep learning. Look at a CNN's components: they are usually very simple (max-pooling, convolutions, etc.).
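The "stack simple layers" idea can be sketched in a few lines. A toy forward pass (Python/NumPy for illustration; the layer widths are arbitrary) in which every hidden layer is nothing more than a linear map followed by ReLU:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Depth comes from stacking, not from any single layer being clever:
# each layer is only an affine map plus an elementwise ReLU.
sizes = [32, 64, 64, 64, 10]          # arbitrary layer widths
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(5, sizes[0]))    # a batch of 5 inputs
for W in weights[:-1]:
    x = relu(x @ W)                   # simple layer, repeated
probs = softmax(x @ weights[-1])      # final linear layer + softmax

print(probs.shape)                    # (5, 10)
```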
38,522
Why convolutional neural networks belong to deep learning?
Deep learning is a generic term that refers to the fact that a deep neural network has at least one hidden layer.
38,523
Why convolutional neural networks belong to deep learning?
Very late, but I think the real answer is more historical. Historically, deep learning refers to networks that use backpropagation, in contrast to the many other types of neural network (Kohonen maps, single-layer perceptrons, oscillatory networks, chaotic networks, etc.) that formed separate research branches. When deep learning appeared, before becoming the well-known hype term it is today, it was just one branch among the others, and its specificity was to use backpropagation (in fact, that is its only specificity). When CNNs appeared, because they also use backpropagation, they were seen as an extension of deep learning with extra flavours. That they belong to the same family can also be seen in the fact that it is common to meld both methods.
38,524
How is the first column of the matrix orthogonal to all the others
The test of orthogonality is the dot product. The column of ones doesn't change any of the values in the other columns, so you are left with the sum of the deviations of every point in any of the remaining columns from the mean of that column, which is zero by definition. $$\begin{bmatrix}1&1&\cdots&1\end{bmatrix} \begin{bmatrix}(X_{11}-\bar{X}_1)\\(X_{12}-\bar{X}_1)\\\vdots\\(X_{1n}-\bar{X}_1)\end{bmatrix}= \sum_{i=1}^{n}(X_{1i}-\bar{X}_1) = n\,(\bar{X}_{1} -\bar{X}_{1})=0 $$ since $$\sum_{i=1}^{n}X_{1i}=n\bar{X}_{1}.$$ The same holds for the other columns.
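This is easy to verify numerically. A quick sketch (Python/NumPy for illustration; any column of raw data will do):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, size=100)   # a column of raw data
centered = x - x.mean()             # subtract the column mean
ones = np.ones_like(x)

dot = ones @ centered               # the dot product from the formula
print(abs(dot) < 1e-10)             # True: orthogonal up to rounding error
```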
38,525
How is the first column of the matrix orthogonal to all the others
Two vectors $\mathbf{u}, \mathbf{v}$ are said to be orthogonal iff $$ \langle \mathbf{u}, \mathbf{v} \rangle = 0 $$ If you do this inner product $r$ times with $\mathbf{v}$ being column 1, and $\mathbf{u}$ being columns $2, ..., r+1 $ respectively, you just have to add the column elements. The orthogonality follows from the properties of the arithmetic mean. If you recall, its second property is that the sum of the deviations of the sample from the mean is zero. Proof Let $\mathbf{x} = (x_1, \dots, x_n)'$ be the observed sample of the r.v. $X$ and let $\bar{x} = n^{-1} \mathbf{x}'\boldsymbol 1$ be the sample mean. Then $$ \sum_{i=1}^n (x_i - \bar{x}) = n \bar{x} - n \bar{x} = 0$$ In your case you just have samples $\mathbf{x}_1 = (x_{11}, \dots, x_{n1})', \dots, \mathbf{x}_r = (x_{1r}, \dots, x_{nr})'$ and the corresponding means are $\bar{x}_1, \dots, \bar{x}_r $. You can apply the same property taking care of adding the sample subscript: $$ \sum_{i=1}^n (x_{ij} - \bar{x}_j) = n \bar{x}_j - n \bar{x}_j = 0 \qquad j=1,\dots, r.$$ I'll change the notation of your question to make it clearer. Hope this will be useful!
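In matrix form, the same property says the ones column of a design matrix with mean-centered regressors is orthogonal to every other column, so the Gram matrix $X'X$ has zeros along its first row and column off the diagonal. A sketch (Python/NumPy for illustration; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 50, 4
raw = rng.normal(size=(n, r))
# Design matrix: a column of ones, then the r centered columns
X = np.column_stack([np.ones(n), raw - raw.mean(axis=0)])

G = X.T @ X  # Gram matrix: entry (0, j) is <ones, column j>
print(np.allclose(G[0, 1:], 0.0))  # True: ones is orthogonal to every centered column
```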
38,526
Visualizing variability from graph
The point is presumably elementary, namely that cases with CHD occur at almost any age, but this seems banal even to non-medical people. But statistically, variability means much more than "variation exists"; it is something to be quantified. With no context and nothing else said, the main problem with the graph seems other than stated. It's that the reader has no way of telling whether each data point is one person or numerous people and no way of comparing frequencies, because people with the same age and status on CHD will inevitably be plotted at exactly the same point. Overplotting is the main weakness of this graph. It's reasonable enough to record age to the nearest year but a more informative graph would show fractions with CHD at each age and absolute numbers too. Quite how best to do that depends on the numbers involved. With modest frequencies dot or stripplots showing points stacked by age might be feasible. With much larger frequencies, paired histograms would be one possibility. Detail: Editing the OP's post reveals a caption that the graph represents 100 subjects. I've left my comments above as first written because I think them fair on what was visibly presented to everyone except those happening to edit the post. P.S. It's a strong convention to represent presence-absence, yes or no, etc. binary states with 0 and 1. A major reason for that is that means of 0s and 1s then represent the proportion present (yes, etc.). The OP suggests that there might be coding such as 1 and 1.01 to which the answer is Yes, in principle, but there would be no reason for such coding stronger than the advantages of 0,1. In any case, graphs like this should always be drawn with a scale suitable to distinguish different states. So use of binary coding is reasonable and does not itself make the graph problematic; indeed the next step is to show fractions (proportions, probabilities) which can be done consistently with that scale.
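The suggested "fractions with CHD at each age, plus absolute numbers" summary is a simple group-and-average step, exploiting the 0/1 coding (the mean of 0s and 1s is the proportion of 1s). A sketch on made-up data (Python for illustration; the ages and risk function are invented, not the book's data):

```python
import numpy as np

rng = np.random.default_rng(4)
age = rng.integers(20, 70, size=100)          # made-up ages for 100 subjects
p = 1 / (1 + np.exp(-(age - 45) / 8.0))       # made-up risk rising with age
chd = (rng.random(100) < p).astype(int)       # 0/1 CHD indicator

# Fraction with CHD and absolute count at each observed age
ages = np.unique(age)
frac = np.array([chd[age == a].mean() for a in ages])
count = np.array([(age == a).sum() for a in ages])

for a, f, c in zip(ages[:5], frac[:5], count[:5]):
    print(f"age {a}: n={c}, fraction CHD={f:.2f}")
```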
38,527
Visualizing variability from graph
By "variability," the authors mean any reasonable measure of the dispersion of CHD conditional on age. Study this by slicing the data into narrow age groups (as shown by different colors in the right hand scatterplot), computing the dispersion of the CHD values within each age group, and plotting those dispersions against age (shown in the left hand dot chart). Because CHD is binary and encoded with zeros and ones, it is a Bernoulli variable. The CHD values within any age group $i$ are completely summarized by their count $n_i$ and the count of (say) the ones, $k_i$, which thereby has a Binomial distribution with (unknown) probability $p_i = \Pr(1)$. Although there are many ways to estimate $p_i$, we needn't be fussy; the proportion $\hat p_i = k_i / n_i$ will do nicely. An appropriate measure of the dispersion of CHD then is the estimated standard deviation $\sqrt{\hat p_i(1-\hat p_i)}$. It can vary from $0$ (when $\hat p_i$ is close to $0$ or $1$) up to $1/2$ (attained when $\hat p_i = 1/2$). The full range of possible standard deviations is shown on the horizontal axis in the left plot. Clearly all of them are in the high (rightmost) range, explaining and justifying the assessment that "the variability at all ages is large." Hosmer and Lemeshow go on to analyze these data into eight age groups rather than the eleven shown here. The conclusion of consistently large variability begins to break down with more age groups: we can see in the right hand plot that all CHD values are constant at the very lowest and very highest ages, exhibiting no variability at all. That is to be expected: when we use many groups, some groups will have few values, resulting in high uncertainty concerning the true dispersion within each group. The authors, by limiting the number of groups, maintain fairly high counts $n_i$ within each group, thereby achieving a robust picture of the dispersion of CHD conditional on age. 
A more sophisticated, but slightly more opaque, method to estimate the conditional dispersion is to smooth CHD against Age using a local estimator of the mean. This smooth can be converted to an estimator of the dispersion using the same formula as before: I have highlighted (in red and by thickening the line) the "large" standard deviations--that is, those greater than the middle value of $1/4$. These data are available in the file chdage.dat found at ftp://ftp.wiley.com/public/sci_tech_med/logistic/alr.zip. The R code used to create these plots is reproduced below for those who would like to experiment with them.

#
# Applied Logistic Regression, Table 1.1
#
folder <- "F:/Research/ALR/logistic/"  # Location of the data file on your system
x <- read.table(paste0(folder, "chdage.dat"), col.names=c("Id", "Age", "CHD"))
#
# Specify age grouping.
#
n.groups <- 11
k <- 5  # Should be relatively prime to n.groups and near n.groups/2
colors <- rainbow(n.groups)
colors <- colors[(1:n.groups * k) %% n.groups + 1]
#
# Study dispersion ("variability") of CHD by age.
#
breaks <- quantile(x$Age, (0:n.groups)/n.groups)
x$AgeGroup <- cut(x$Age, breaks)
s <- aggregate(x$CHD, by=list(x$AgeGroup), function(y) sqrt(mean(y)*(1-mean(y))))
dotchart(s$x, s$Group.1, xlim=c(0, 0.525), pch=16, col=colors,
         cex=min(1, 10/n.groups), xlab="Standard Deviation",
         main="Variation in CHD by Age Group", cex.main=0.8)
#
# Plot the raw data.
#
names(colors) <- levels(x$AgeGroup)
plot(jitter(x$Age), x$CHD, yaxp=c(0, 1, 1), ylim=c(0,1)+c(-1,1)*0.05,
     cex=1.25, col=colors[x$AgeGroup],
     xlab="Age (years, jittered)", ylab="CHD", main="CHD vs. Age", cex.main=0.8)
abline(v = breaks, lty=1, col="Gray")
#
# Plot the smoothed dispersion.
#
CHD.smooth <- lowess(x$Age, x$CHD)
CHD.smooth$y <- pmin(1, pmax(0, CHD.smooth$y))
CHD.sd <- sqrt(CHD.smooth$y * (1-CHD.smooth$y))
large <- CHD.sd > 1/4
plot(CHD.smooth$x, CHD.sd, type="l", lwd=2, col="Gray", xlab="Age",
     ylab="Standard Deviation", main="Smoothed Dispersion of CHD",
     cex.main=0.75, cex.lab=0.75, cex.axis=0.75)
lines(CHD.smooth$x[large], CHD.sd[large], lwd=3, col="Red")
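The key numerical fact used above, that $\sqrt{p(1-p)}$ ranges from $0$ up to a maximum of $1/2$ attained at $p = 1/2$, is easy to verify directly (Python for illustration):

```python
import numpy as np

# Bernoulli standard deviation as a function of p
p = np.linspace(0.0, 1.0, 1001)
sd = np.sqrt(p * (1 - p))

print(round(float(sd.max()), 3))  # 0.5
print(float(p[sd.argmax()]))      # 0.5: the maximum is at p = 1/2
print(float(sd[0]), float(sd[-1]))  # 0.0 at both extremes
```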
38,528
Visualizing variability from graph
I believe the point they are trying to make is that, for any given age bin (say 40-45) there are about equal numbers of CHD=0 and CHD=1. This indicates that CHD is not very predictable from age, so it has a high variability. If you wanted to quantify this, you could use something like binary entropy - the closer that p(CHD=0) is to 1/2, the more entropy/variability there is for that age bin.
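The binary entropy measure mentioned above can be sketched directly; it peaks at 1 bit when p(CHD=0) = 1/2 and falls to 0 as a bin becomes pure (Python for illustration):

```python
import numpy as np

def binary_entropy(p):
    # H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0
    p = np.asarray(p, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return np.nan_to_num(h)

print(binary_entropy(0.5))   # 1.0 bit: maximal variability in the bin
print(binary_entropy(0.05))  # much less than 1: the bin is nearly pure
```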
38,529
forecast using arima models [closed]
In the future please provide a reproducible example for a question such as yours, as I don't have any idea of the characteristics of your data set. As @Irishstat mentions, your data might not have a trend/pattern and could have a level shift.

Expanding my comment: an ARIMA(0,1,1) model is equivalent to simple exponential smoothing. The level of the forecast would be flat, i.e., the last value of the actuals. Below is an example illustrating my comment.

library("fma")
library("forecast")

## Without drift
fit.m <- Arima(eggs, order = c(0,1,1))
forecast.m <- forecast(fit.m, h=10)
plot(forecast.m)

## With drift
fit.t <- Arima(eggs, order = c(0,1,1), include.drift=TRUE)
forecast.t <- forecast(fit.t, h=10)
plot(forecast.t)

As you can see, the first model (without drift) does not capture the downward trend. The forecast is flat.

forecast.m$mean
Time Series:
Start = 1994
End = 2003
Frequency = 1
[1] 62.87244 62.87244 62.87244 62.87244 62.87244 62.87244 62.87244 62.87244 62.87244 62.87244

The second model captures the downward trend because it includes a drift term:

forecast.t$mean
Time Series:
Start = 1994
End = 2003
Frequency = 1
[1] 60.13606 57.75869 55.38132 53.00396 50.62659 48.24922 45.87186 43.49449 41.11712 38.73976
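To see why the drift forecasts decline linearly, note that each horizon just adds the estimated drift once more to the previous forecast. A Python sketch of that arithmetic, reusing the first two printed values of forecast.t$mean above (so the drift estimate is only as precise as the rounded output):

```python
# With drift, the ARIMA(0,1,1) point forecast is the flat SES level plus a
# constant drift added once per step. The two numbers below are the first
# two forecasts printed in forecast.t$mean.
first, second = 60.13606, 57.75869
drift = second - first                    # estimated per-period drift (< 0 here)
forecasts = [first + h * drift for h in range(10)]
print([round(f, 5) for f in forecasts])   # matches forecast.t$mean to ~1e-4
```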
38,530
forecast using arima models [closed]
As you note, the forecasts for your $\text{ARIMA}(0,1,1)$ are constant. Nothing is amiss.

Let's begin by considering the differenced series. This is the simple case of an $\text{MA}(1)$ model with $0$ mean: $y(t) = \varepsilon(t) + \theta\, \varepsilon(t-1)$.

If we're at time $T$, we have for the forecast of the observation at $T+1$:

$E[y_{T+1|T}] = E[\varepsilon_{T+1|T}] + \theta E[\varepsilon_{T|T}] = 0 + \theta \, \hat{\varepsilon}_{T|T}$

The last term, $\hat{\varepsilon}$, is a function of the data, but its exact form doesn't matter for our present purpose; it's just some forecasted value.

Now, let's forecast the next one:

$E[y_{T+2|T}] = E[\varepsilon_{T+2|T}] + \theta E[\varepsilon_{T+1|T}] = 0 + \theta\cdot 0 = 0$

... and similarly, all subsequent forecasts are $0$.

Now for an integrated $\text{MA}$, those are the predicted differences of the series. So for the undifferenced predictions, once you have the first forecast, all additional forecasts are equal to it.

Similarly, if you had, for example, an $\text{ARIMA}(0,1,2)$ model, then the second forecast would differ from the first, and then all subsequent forecasts would be constant.
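The recursion above is easy to mimic numerically. A small Python sketch (the values of $\theta$, the last level, and the last residual estimate are made up) shows that only the one-step forecast moves; every forecast after it is constant:

```python
# Forecast recursion for ARIMA(0,1,1): the differenced series is MA(1),
#   y(t) = e(t) + theta * e(t-1),
# so only the first step uses the last residual estimate; after that the
# expected difference is 0 and the forecast level stays put.
def arima011_forecasts(last_level, last_resid, theta, h):
    forecasts = []
    level = last_level
    for step in range(1, h + 1):
        expected_diff = theta * last_resid if step == 1 else 0.0
        level += expected_diff
        forecasts.append(level)
    return forecasts

# Made-up values for the last observed level, residual and theta:
print(arima011_forecasts(last_level=100.0, last_resid=2.0, theta=0.4, h=5))
```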
38,531
forecast using arima models [closed]
A characteristic of the model you are using (simple exponential smoothing) is that forecasts are identical for every period in the future, irrespective of the particular value of the MA(1) coefficient. This would also be true for a model that simply had a constant (a mean model) or a recent level shift (a last local mean).
38,532
forecast using arima models [closed]
I think you need to convert the data to time-series format. The forecast is flat because the model has no way of knowing which month/year/time period each observation belongs to, and so it has no seasonal patterns to learn from.
38,533
Regression that creates $x \log(x)$ functions?
It's quite straightforward. Simply create a new variable, $x_1 = x\ln(x)$ then fit a linear regression $E(y)=b+cx_1$. Here's an example (the code is in R but I'll give the data I generate so you can try it in your favourite line-fitting routine):

#generate some data:
set.seed(29384702)
x = runif(20,.05,6)
y = 11.-0.5*x*log(x)+rnorm(20)
plot(x,y)

Here's the data (rounded):

     x      y
2.7994 10.536
2.9113  9.748
4.7754  5.681
3.4272  8.663
4.2275  8.347
5.5773  5.404
0.2158 11.270
4.8779  7.118
5.1431  7.634
4.6209  9.550
2.0105 10.805
2.2227  9.918
1.2500 11.497
3.5105  8.611
0.1927 12.197
4.5592  7.179
3.7578  7.755
2.4346 10.739
3.8396  7.204
2.7911  8.978

So as I said, we make a new x-variable:

x1 = x*log(x)

This makes the relationship linear in $x_1$ (a plot of $y$ against $x_1$ shows a straight-line band), and we fit what is now a linear regression:

yxfit = lm(y~x1)

Now let's plot that fitted curve:

xnew = seq(0.01,6.01,.1)
newx1 = data.frame(x1=xnew*log(xnew))
predyx = predict(yxfit,newdata=newx1)
lines(predyx~xnew,col=2)

producing the fitted curve overlaid on the scatterplot of the data. We can do it as easily in something else; the same model fits equally well in Excel, for example.

Other functions of $x$

The same trick works for any functional fit of the form $E(y)=b+cg(x)$, by letting $x_1=g(x)$. A much wider variety of functions can be generated by considering models of the form $E(y)=\beta_0+\beta_1f_1(x)+\beta_2f_2(x)+...+\beta_kf_k(x)+\varepsilon$, which may be fitted by ordinary multiple regression as long as care is taken to avoid multicollinearity. You may be interested to see here where a sinusoidal model, and then a more complicated periodic model are fitted using linear regression.

One thing you should be aware of with fitting curved models, such as fitting a function of the form $ax^b$ say, is the assumption about the variation of the points about the mean; it can affect the suitability of some of those choices of model - at least for some purposes - as well as the efficiency of the estimates.
Whenever the $y$ variable is transformed to linearize a model you change the assumptions you make about the variation about the model (and note also that if your fit is approximating the expected value on the transformed-y scale, when you transform it back, it's no longer an expectation). You should make sure that what is being done to fit the model makes sense for your data.
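The same fit can be reproduced without lm: since the model is linear in $x_1$, the closed-form simple-regression formulas apply directly. A Python sketch using the rounded data from the answer (the estimates will differ slightly from the R fit because of the rounding):

```python
import math

# Closed-form simple regression of y on x1 = x*log(x), using the rounded
# (x, y) pairs listed in the answer.
x = [2.7994, 2.9113, 4.7754, 3.4272, 4.2275, 5.5773, 0.2158, 4.8779, 5.1431,
     4.6209, 2.0105, 2.2227, 1.2500, 3.5105, 0.1927, 4.5592, 3.7578, 2.4346,
     3.8396, 2.7911]
y = [10.536, 9.748, 5.681, 8.663, 8.347, 5.404, 11.270, 7.118, 7.634, 9.550,
     10.805, 9.918, 11.497, 8.611, 12.197, 7.179, 7.755, 10.739, 7.204, 8.978]
x1 = [xi * math.log(xi) for xi in x]
n = len(x1)
mx, my = sum(x1) / n, sum(y) / n
c = sum((u - mx) * (v - my) for u, v in zip(x1, y)) / sum((u - mx) ** 2 for u in x1)
b = my - c * mx
print(round(b, 2), round(c, 2))  # intercept and slope, near the true 11 and -0.5
```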
38,534
Kalman filter equation derivation
There is a simple, straightforward derivation that starts with the assumptions of the Kalman filter and requires a little algebra to arrive at the update and extrapolation equations, as well as some properties regarding the measurement residuals (the difference between the predicted state and the measurement).

To start, the Kalman Filter is a linear, unbiased estimator that uses a predictor/corrector process to estimate the state given a sequence of measurements. This means that the general process involves predicting the state and then correcting the state based upon the difference between that prediction and the observed measurement (also known as the residual). The question becomes how to update the state prediction with the observed measurement such that the resulting state estimate: (1) is a linear combination of the predicted state "x" and the observed measurement "z" and (2) has an error with zero mean (unbiased). Based upon these assumptions, the Kalman Filter can be derived.

State and Measurement Model Notation and Assumptions

The state dynamics model for the state vector $\bar x_k$ at time $k$ is given by the state transition matrix $F_{k-1}$ and the state vector $\bar x_{k-1}$ at the previous time $k-1$, together with the process noise $\bar v_{k-1}$ at time $k-1$:

$$\bar x_k = F_{k-1} \bar x_{k-1} + \bar v_{k-1}$$

The measurement model for the measurement vector $\bar z_k$ at time $k$ is given by the observation matrix $H_k$ and the state vector $\bar x_k$ at time $k$, together with the measurement noise $\bar w_k$ at time $k$:

$$\bar z_k = H_k \bar x_k + \bar w_k$$

The Kalman Filter derivation is easier if we make the Linear Gaussian assumptions and assume that the measurement noise and process noise are statistically independent (uncorrelated):

$$\bar v_k \sim N(0, Q_k), \qquad \bar w_k \sim N(0, R_k), \qquad E[\bar v_j \bar w_k^{\prime}] = 0$$

where $Q_k = E[\bar v_k \bar v_k^{\prime}]$ and $R_k = E[\bar w_k \bar w_k^{\prime}]$ are the process and measurement noise covariance matrices.

State Estimation and Error Notations

Now, we wish to find the state estimate $\hat x$ given a time series of measurements and define the following notation:

$\hat x_{k|k}$ is the state estimate at time $k$ after updating the Kalman Filter with all measurements through time $k$. That is, it is the updated/filtered state estimate.

$\hat x_{k|k-1}$ is the state estimate at time $k$ after updating the Kalman Filter with all but the most recent measurement. That is, it is the predicted state estimate.

$\tilde x_{j|k}$ is the estimation error in the state, which is given by: $\tilde x_{j|k} = x_j - \hat x_{j|k}$

$P_{k|k}$ is the state estimate error covariance matrix at time $k$ after updating the Kalman Filter with all measurements through time $k$. That is, it is the error covariance for the updated/filtered state estimate.

$P_{k|k-1}$ is the state estimate error covariance matrix at time $k$ after updating the Kalman Filter with all but the most recent measurement. That is, it is the error covariance for the predicted state estimate.

$P_{j|k}$ is the state estimate error covariance matrix, which is given by: $P_{j|k} = E[\tilde x_{j|k} \tilde x_{j|k}^{\prime}]$

The predicted measurement that is predicted by the Kalman Filter is found by taking the expectation of the measurement model with the zero mean measurement noise assumption:

$$\hat z_{k|k-1} = E[\bar z_k] = E[H_k \bar x_k + \bar w_k] = H_k E[\bar x_k] + E[\bar w_k] = H_k \hat x_{k|k-1}$$

Finally, the residual vector is the difference between the observed measurement $z_k$ at time $k$ and the predicted measurement:

$$\eta_k = z_k - \hat z_{k|k-1} = z_k - H_k \hat x_{k|k-1}$$

Kalman Filter Derivation

We assume that the updated state estimate is a linear combination of the predicted state estimate and the observed measurement as:

$$\hat x_{k|k} = K^{\prime}_k \hat x_{k|k-1} + K_k z_k$$

and we wish to find the weights (gains) $K^{\prime}_k$ and $K_k$ that produce an unbiased estimate with a minimum state estimate error covariance.

Unbiased Estimate Assumption

Substituting the measurement model and $\hat x_{k|k-1} = x_k - \tilde x_{k|k-1}$, the estimation error is:

$$\tilde x_{k|k} = x_k - \hat x_{k|k} = (I - K^{\prime}_k - K_k H_k)\, x_k + K^{\prime}_k\, \tilde x_{k|k-1} - K_k \bar w_k$$

and with $E[\tilde x_{k|k}] = 0$ (together with $E[\tilde x_{k|k-1}] = 0$ and $E[\bar w_k] = 0$), this results in:

$$(I - K^{\prime}_k - K_k H_k)\, E[x_k] = 0 \quad \text{for any state,}$$

which results in:

$$K^{\prime}_k = I - K_k H_k$$

Substituting this relationship between $K^{\prime}_k$ and $K_k$ back into the linear combination assumption, we have:

$$\hat x_{k|k} = \hat x_{k|k-1} + K_k \left( z_k - H_k \hat x_{k|k-1} \right) = \hat x_{k|k-1} + K_k \eta_k$$

where $K_k$ is called the Kalman Gain.

Minimizing the State Estimate Error Covariance

We start by computing the algebraic form of the updated covariance matrix:

$$P_{k|k} = E[\tilde x_{k|k} \tilde x_{k|k}^{\prime}] = (I - K_k H_k)\, P_{k|k-1}\, (I - K_k H_k)^{\prime} + K_k R_k K_k^{\prime}$$

We then compute the trace of the error covariance $Tr[P_{k|k}]$ and minimize it by: (1) computing the matrix derivative with respect to the Kalman Gain $K_k$ and (2) setting this matrix equation to zero:

$$\frac{\partial\, Tr[P_{k|k}]}{\partial K_k} = -2\,(I - K_k H_k)\, P_{k|k-1} H_k^{\prime} + 2\, K_k R_k = 0$$

The solution for the Kalman Gain $K_k$ is given by:

$$K_k = P_{k|k-1} H_k^{\prime} \left( H_k P_{k|k-1} H_k^{\prime} + R_k \right)^{-1}$$

Kalman Update

From the above derivation, the Kalman Update equations are given as:

$$\hat x_{k|k} = \hat x_{k|k-1} + K_k \left( z_k - H_k \hat x_{k|k-1} \right)$$
$$P_{k|k} = \left( I - K_k H_k \right) P_{k|k-1}$$

where

$$K_k = P_{k|k-1} H_k^{\prime} S_k^{-1}, \qquad S_k = H_k P_{k|k-1} H_k^{\prime} + R_k$$

Kalman Extrapolation

The extrapolation equations are simply a result of applying the system dynamics model and applying the definition of the error covariance matrix:

$$\hat x_{k|k-1} = F_{k-1}\, \hat x_{k-1|k-1}$$
$$P_{k|k-1} = F_{k-1}\, P_{k-1|k-1}\, F_{k-1}^{\prime} + Q_{k-1}$$

Residual Covariance

The residual covariance is given by applying the formal definition of the expectation of the quadratic form of the residual vector $\eta_k$:

$$S_k = E[\eta_k \eta_k^{\prime}] = H_k P_{k|k-1} H_k^{\prime} + R_k$$
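The update and extrapolation equations derived above can be exercised in a few lines. A minimal scalar (1-D) Python sketch, with made-up values for $F$, $H$, $Q$, $R$ and the measurements; everything generalizes to matrices:

```python
# Minimal 1-D Kalman filter implementing the update and extrapolation
# equations derived above. All numbers are illustrative.
def kalman_step(x, P, z, F=1.0, H=1.0, Q=0.01, R=0.25):
    # Extrapolation: x(k|k-1) = F x(k-1|k-1),  P(k|k-1) = F P F' + Q
    x_pred = F * x
    P_pred = F * P * F + Q
    # Residual and its covariance: eta = z - H x(k|k-1),  S = H P(k|k-1) H' + R
    eta = z - H * x_pred
    S = H * P_pred * H + R
    # Gain, then update: x(k|k) = x(k|k-1) + K eta,  P(k|k) = (I - K H) P(k|k-1)
    K = P_pred * H / S
    return x_pred + K * eta, (1.0 - K * H) * P_pred

x, P = 0.0, 1.0                         # initial state estimate and covariance
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:   # noisy measurements of a constant ~1
    x, P = kalman_step(x, P, z)
print(round(x, 3), round(P, 4))         # estimate near 1, covariance much reduced
```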
38,535
Kalman filter equation derivation
A derivation is given here https://missingueverymoment.wordpress.com/2019/12/02/derivation-of-kalman-filter/ Basically, it assumes the linear dependence relations and minimizes the squared prediction error to obtain the form of the Kalman gain. The matrix-derivative identities are also given on the site.
38,536
Kalman filter equation derivation
I think you want $p(\boldsymbol{X}_t|\boldsymbol{X}_{t-1}) = N(A\boldsymbol{X}_{t-1} + \mu_p,\ldots)$ in your second equation. Regarding easy-to-follow derivations of the filter, there are many in textbooks such as Durbin & Koopman or Anderson & Moore. You might also look at the Kalman filter article on Wikipedia.
38,537
Kalman filter equation derivation
I would like to add some intuition for the Kalman gain. The Kalman gain is given by $$K = \Sigma_pH^T(H\Sigma_pH^T + \Sigma_m)^{-1}$$ A useful way to look at this is $$K = \frac{\Sigma_pH^T}{H\Sigma_pH^T + \Sigma_m}$$ The intuition behind this is that if $\Sigma_m$ were infinitely large, i.e. $\Sigma_m \to \infty$, i.e. our sensors have little credibility, then $K \approx 0$, meaning we completely discard the sensor observation. On the other hand, if $\Sigma_m$ were $0$, i.e. our sensor observations were absolutely reliable and accurate, $K$ becomes $\frac{\Sigma_pH^T}{H\Sigma_pH^T} \approx H^{-1}$, which means we convert our sensor measurement (more accurately, the discrepancy between the sensor measurement and our expected sensor results) back into state space and add it to our current state estimation.
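Both limits are easy to verify numerically in the scalar case, where the gain reduces to $K = \Sigma_p H / (H \Sigma_p H + \Sigma_m)$. A Python sketch with arbitrary made-up numbers for $\Sigma_p$ and $H$:

```python
# Scalar sanity check of the two limiting cases of the Kalman gain,
#   K = Sigma_p * H / (H * Sigma_p * H + Sigma_m).
def gain(sigma_p, H, sigma_m):
    return sigma_p * H / (H * sigma_p * H + sigma_m)

sigma_p, H = 2.0, 4.0
print(gain(sigma_p, H, sigma_m=1e9))  # hopeless sensor: gain is essentially 0
print(gain(sigma_p, H, sigma_m=0.0))  # perfect sensor: gain equals 1/H = 0.25
```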
38,538
Kalman filter equation derivation
Just a small correction to the excellent answer above. The residual vector can be assessed either before the Kalman correction or after. The pre-fit residual vector is \begin{equation} {\eta}_{k|k-1} = H_k (x_k - \hat{x}_{k|k-1}) + w_k = H_k \tilde{x}_{k|k-1} + w_k \end{equation} and equivalently the post-fit residual \begin{equation} {\eta}_{k|k} = H_k (x_k - \hat{x}_{k|k}) + w_k = H_k \tilde{x}_{k|k} + w_k \end{equation} By taking the expectation of the outer product of each residual vector with itself, one arrives at the definition of the residual covariance as stated above.
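A scalar numerical sketch of the two residuals (all numbers made up): with scalar $H$, the update shrinks the residual by the factor $(1 - K H)$, so the post-fit residual is always smaller in magnitude than the pre-fit one.

```python
# One scalar Kalman update, comparing the pre-fit and post-fit residuals.
# H, P_pred, R, x_pred and z are illustrative values; H = 1 for simplicity.
H, P_pred, R = 1.0, 0.5, 0.25
x_pred, z = 10.0, 11.2

K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
pre_fit = z - H * x_pred                # residual before the correction
x_upd = x_pred + K * pre_fit            # corrected state estimate
post_fit = z - H * x_upd                # residual after the correction

print(round(pre_fit, 3), round(post_fit, 3))  # post-fit = (1 - K*H) * pre-fit
```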
38,539
Usable estimators for parameters in Gumbel distribution
The Gumbel distribution

The Gumbel distribution is often used to model the distribution of extreme values. It is one of the three particular cases of the generalized extreme value distribution (GEV), namely the case where the GEV parameter $\xi$ equals $0$ (that's why the Gumbel distribution is sometimes called the "type I extreme value distribution"). Parameter fitting for a Gumbel distribution using maximum likelihood estimation (MLE) is discussed in Stuart Coles' book An Introduction to Statistical Modeling of Extreme Values (pages 55 ff.).

The Gumbel distribution's CDF is
$$
F_{X}(x)=\exp\left({-\exp{\left(-\frac{x-\mu}{\sigma}\right)}}\right), \quad x\in \mathbb{R}
$$
with $\mu \in \mathbb{R}$ and $\sigma >0$. The PDF is given by the expression
$$
f_{X}(x)=\frac{1}{\sigma}\exp{\left(-\frac{x-\mu}{\sigma} \right)}\cdot \exp{\left\{ -\exp{\left(-\frac{x-\mu}{\sigma}\right)}\right\}}.
$$
Finally, the quantile function (i.e. the inverse CDF) for probability $p$ is given by
$$
Q(p)=\mu-\sigma\cdot\log{\left(-\log{\left(p\right)}\right)}.
$$
We will need these definitions later in R.

Parameter estimation

Given that $X_1,\ldots,X_n$ are iid variables following a Gumbel distribution, the log-likelihood is
$$
\ell(\mu,\sigma)=-n\log{(\sigma)}-\sum_{i=1}^{n}\left(\frac{x_{i}-\mu}{\sigma}\right)-\sum_{i=1}^{n}\exp{\left\{-\left(\frac{x_{i}-\mu}{\sigma}\right) \right\}}.
$$
The log-likelihood can be maximized using standard numerical optimization algorithms. In their book Statistical Distributions, Forbes et al. (2010) provide the MLE estimates for $\mu$ and $\sigma$. Namely, the estimators $\hat{\mu},\hat{\sigma}$ are the solutions of the simultaneous equations
$$
\begin{align}
\hat{\mu} &=-\hat{\sigma}\log{\left[\frac{1}{n}\sum_{i=1}^{n}\exp{\left(-\frac{x_{i}}{\hat{\sigma}}\right)} \right]}\\
\hat{\sigma} &= \bar{x}-\frac{\sum_{i=1}^{n} x_{i}\exp{\left(-\frac{x_{i}}{\hat{\sigma}}\right)}}{ \sum_{i=1}^{n} \exp{\left(-\frac{x_{i}}{\hat{\sigma}}\right)}}
\end{align}
$$
where $\bar{x}$ denotes the sample mean.

The maximum likelihood estimation can be done in R (other statistical packages such as Stata and SAS provide similar capabilities). Because the Gumbel distribution is not available by default in R, we have to define the CDF, PDF and quantile function ourselves, which is straightforward. Here is an example R script that does the trick:

#=================================================================================
# Load package
#=================================================================================

library(fitdistrplus)

#=================================================================================
# Define the PDF, CDF and quantile function for the Gumbel distribution
#=================================================================================

dgumbel <- function(x, mu, s) {    # PDF
  exp((mu - x)/s - exp((mu - x)/s))/s
}

pgumbel <- function(q, mu, s) {    # CDF
  exp(-exp(-((q - mu)/s)))
}

qgumbel <- function(p, mu, s) {    # quantile function
  mu - s*log(-log(p))
}

#=================================================================================
# Some data (annual maximum mean daily flows ("annual floods"))
#=================================================================================

flood.data <- c(312,590,248,670,365,770,465,545,315,115,232,260,655,675,
                455,1020,700,570,853,395,926,99,680,121,976,916,921,191,
                187,377,128,582,744,710,520,672,645,655,918,512,255,1126,
                1386,1394,600,950,731,700,1407,1284,165,1496,809)

#=================================================================================
# Fit the Gumbel distribution using maximum likelihood estimation (MLE)
# Make some diagnostic plots
#=================================================================================

gumbel.fit <- fitdist(flood.data, "gumbel", start=list(mu=5, s=5), method="mle")

summary(gumbel.fit)

Fitting of the distribution ' gumbel ' by maximum likelihood 
Parameters : 
    estimate Std. Error
mu  471.6864   43.33664
s   298.8155   32.11813
Loglikelihood:  -385.1877   AIC:  774.3754   BIC:  778.316 
Correlation matrix:
          mu         s
mu 1.0000000 0.3208292
s  0.3208292 1.0000000

gofstat(gumbel.fit, discrete=FALSE)   # goodness-of-fit statistics

Goodness-of-fit statistics
                             1-mle-gumbel
Kolmogorov-Smirnov statistic   0.09956968
Cramer-von Mises statistic     0.08826106
Anderson-Darling statistic     0.53360850

Goodness-of-fit criteria
                               1-mle-gumbel
Aikake's Information Criterion     774.3754
Bayesian Information Criterion     778.3160

# Plot the fit
par(cex=1.2, bg="white")
plot(gumbel.fit, lwd=2, col="steelblue")

The maximum likelihood estimates are $\hat{\mu} = 471.69$ and $\hat{\sigma}=298.82$, with respective standard errors $\widehat{\mathrm{SE}}_{\hat{\mu}}=43.34$ and $\widehat{\mathrm{SE}}_{\hat{\sigma}}=32.12$. The fit looks reasonable (there are some hints of systematic deviations), and the package fitdistrplus provides the estimated standard errors of the parameters as well as goodness-of-fit statistics.
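As a sanity check, the two estimating equations above can also be solved directly with a one-dimensional root search, without fitdistrplus. This is only a sketch: it reuses the flood.data vector from the script above, and the bracketing interval passed to uniroot() is an ad-hoc choice.

```r
# flood.data as defined in the script above
flood.data <- c(312,590,248,670,365,770,465,545,315,115,232,260,655,675,
                455,1020,700,570,853,395,926,99,680,121,976,916,921,191,
                187,377,128,582,744,710,520,672,645,655,918,512,255,1126,
                1386,1394,600,950,731,700,1407,1284,165,1496,809)

# g(s) = 0 is the second estimating equation, rearranged
g <- function(s, x) {
  s - mean(x) + sum(x * exp(-x/s)) / sum(exp(-x/s))
}

# Solve for sigma-hat, then plug into the first equation to get mu-hat
sigma.hat <- uniroot(g, interval=c(50, 1000), x=flood.data, tol=1e-8)$root
mu.hat    <- -sigma.hat * log(mean(exp(-flood.data/sigma.hat)))

round(c(mu=mu.hat, s=sigma.hat), 4)
```

Because these are exactly the score equations of the log-likelihood, the root should reproduce the estimates reported by fitdist() up to numerical tolerance.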
Expert forecasting software evaluation
The last review of SCA and AUTOBOX was done in 1995 (see here). We had approached Prof. Len Tashman, the editor of Foresight, for a head-to-head review of AUTOBOX vs SAS, but SAS's representative replied in the negative in July 2010:

"ARIMA modeling is an interesting subject, but not one I would want to focus on, or one I think of as central for modern forecasting methods. Even with the scope of the comparison explicitly limited to ARIMA functionality, there is still an imbalance in an Autobox versus SAS comparison. Autobox is a niche product that is all about automatic ARIMA modeling. On the other hand, SAS/ETS offers a broad range of tools for econometrics and time series modeling—our PROC ARIMA is just one of twenty-seven ETS procedures. For forecasting, our primary offering is SAS Forecast Server, not SAS/ETS. In turn, SAS/ETS and Forecast Server are just two of the dozens of offerings in the SAS product line. So even if the article is specifically limited to ARIMA, I'm afraid a 'SAS' versus Autobox comparison would still come across a little like comparing a Notepad alternative to 'Microsoft'."

To correct the above: AUTOBOX is and always has been about automatic and non-automatic modelling of both ARIMA and transfer function models. In this way experts can use their expertise, much like with SCA and SAS, or at their option use the expert heuristics within AUTOBOX as a productivity aid. There is a fairly recent review of AUTOBOX in 2010 (see here) that was very thorough.

I am one of the developers of AUTOBOX. In case you need more help, please feel free to contact me and/or pose additional questions.
Expert forecasting software evaluation
The only proper evaluation of automatic forecasting software is head-to-head competition on real data. The last large-scale competition of that kind was the M3 competition, run in 2000. The results are publicly available: http://www.forecastingprinciples.com/paperpdf/Makridakia-The%20M3%20Competition.pdf. Draw your own conclusions about which automatic software is best. Software and automatic methods introduced since 2000 are not listed there. There are some additional comparisons published in the International Journal of Forecasting from time to time.
Expert forecasting software evaluation
This is not an answer but rather a comment to @Irishstat. It is a lengthy comment, so I'm writing it as an answer.

@Irishstat, I'm really surprised that the editor of a reputable journal like Foresight would need to get permission from a SAS representative to do a comparison between two software packages. I would consider this a piece of scientific empirical research that anyone who has access to SAS forecasting software and Autobox should be able to do, especially in universities. I was reading the golden rule of forecasting article by Professor Armstrong, who recommends evidence-based adoption of methods and software/tools. I would think that a journal like Foresight should be able to do an independent comparison of forecasting software in the interest of the forecasting community, not tied to any specific forecasting vendor, provided adequate funding.

I have to applaud Autobox and Forecast Pro for competing in forecasting competitions such as M3. I know there is an M4 competition; I'm not sure if SAS competed in M4. Do we know if and when the M4 competition results will be published?
Why Normalization (Standardization) values should be smaller than $1$?
Rescaling the input features is just a linear transformation. There's no right or wrong way of rescaling outside a problem context. If you want to map the range 1 - 100 to the range 1 - 10 linearly, you should do:
$$ x \leftarrow \frac{x - 1}{99} \times 9 + 1 $$
This maps 1 to 1 and 100 to 10, and it will make the durations have the same range as the other features.

One problem with the method above is that if all the durations are clustered around, say, 40, with just a very few outliers close to 100, then most of the range won't be used. Calculating the z-score of each individual feature may be preferable:
$$ x \leftarrow \frac{x - \text{mean}(x)}{\text{stddev}(x)} $$
as the transformed features will all have mean 0 and standard deviation 1 and should be more comparable.
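Both transformations are one-liners in R. A small sketch with made-up durations (the numbers are purely illustrative):

```r
x <- c(1, 5, 12, 37, 40, 41, 100)   # durations on a 1-100 scale (made up)

# linear map from [1, 100] to [1, 10]
x.range <- (x - 1)/99 * 9 + 1

# z-scores: mean 0, standard deviation 1
x.z <- (x - mean(x))/sd(x)

x.range[c(1, length(x))]   # the endpoints 1 and 100 map to 1 and 10
round(mean(x.z), 10)       # 0 up to floating point
round(sd(x.z), 10)         # 1
```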
Why Normalization (Standardization) values should be smaller than $1$?
One way of standardizing variables is to turn each value into a z-score by taking $\frac{x - \bar{x}}{sd}$. Doing this, you would only have to do it once; however, it will not result in a range of $-1$ to $1$: the result can be any number. But most values will be between $-1$ and $1$.
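In R, the base function scale() computes exactly this z-score. A minimal illustration of the point that the result is not confined to $[-1, 1]$:

```r
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
z <- as.numeric(scale(x))   # (x - mean(x)) / sd(x)

mean(z)    # 0 up to floating point
sd(z)      # 1
range(z)   # the extreme observations fall outside [-1, 1]
```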
Comparing two vectors from negative binomial distribution in R
Your dependent variable is a count ("number of dots counted on the image of a cell"). Asking whether the distribution of counts is similar in two groups is conceptually the same as asking whether group membership matters for the distribution of counts. I suggest a Poisson regression as a first step, where you model the dot count by group membership. In a second step, one might then try to decide whether the Poisson assumption of "conditional variance = conditional mean" is violated, suggesting a move to a quasi-Poisson model, to a Poisson model with heteroscedasticity-consistent (HC) standard error estimates, or to a negative binomial model.

Given data c.dots and w.dots as in the OP's example 1, we first create a data frame with predicted variable Y = number of dots and predictor X = factor with group membership. Then we run a standard Poisson regression.

> dotsDf <- data.frame(Y=c(c.dots, w.dots),
+                      X=factor(rep(c("c", "w"), c(length(c.dots), length(w.dots)))))
> glmFitP <- glm(Y ~ X, family=poisson(link="log"), data=dotsDf)
> summary(glmFitP)   # Poisson model

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -0.9289     0.1125  -8.256  < 2e-16 ***
Xw            0.8455     0.1345   6.286 3.26e-10 ***

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 509.01  on 399  degrees of freedom
Residual deviance: 465.90  on 398  degrees of freedom
AIC: 862.16

This indicates a significant predictor "group membership = w", resulting from dummy coding the grouping factor (2 groups => 1 dummy predictor, c is the reference level). For comparison, we can run the quasi-Poisson model that has an extra dispersion parameter for the conditional variance.

> glmFitQP <- glm(Y ~ X, family=quasipoisson(link="log"), data=dotsDf)
> summary(glmFitQP)   # quasi-Poisson model

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -0.9289     0.1298  -7.158 3.96e-12 ***
Xw            0.8455     0.1551   5.450 8.85e-08 ***

(Dispersion parameter for quasipoisson family taken to be 1.330164)

    Null deviance: 509.01  on 399  degrees of freedom
Residual deviance: 465.90  on 398  degrees of freedom
AIC: NA

The parameter estimates are the same, but the standard errors of these estimates are slightly larger. The estimated dispersion parameter is slightly larger than 1 (the value in the Poisson model), indicating some overdispersion. An alternative approach is to use a Poisson model with HC-consistent standard errors:

> library(sandwich)   # for vcovHC()
> library(lmtest)     # for coeftest()
> hcSE <- vcovHC(glmFitP, type="HC0")   # HC-consistent standard errors
> coeftest(glmFitP, vcov=hcSE)

z test of coefficients:

            Estimate Std. Error z value  Pr(>|z|)    
(Intercept) -0.92887    0.14084 -6.5952 4.246e-11 ***
Xw           0.84549    0.16033  5.2735 1.339e-07 ***

Again, somewhat larger standard errors. Now the negative binomial model:

> library(MASS)   # for glm.nb()
> glmFitNB <- glm.nb(Y ~ X, data=dotsDf)
> summary(glmFitNB)   # negative binomial model

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -0.9289     0.1192  -7.791 6.66e-15 ***
Xw            0.8455     0.1456   5.806 6.40e-09 ***

(Dispersion parameter for Negative Binomial(3.212) family taken to be 1)

    Null deviance: 427.19  on 399  degrees of freedom
Residual deviance: 391.20  on 398  degrees of freedom
AIC: 856.87

              Theta:  3.21 
          Std. Err.:  1.46 
 2 x log-likelihood:  -850.867 

You can test the negative binomial model against the Poisson model in a likelihood ratio test for the model comparison:

> library(pscl)   # for odTest()
> odTest(glmFitNB)

Likelihood ratio test of H0: Poisson, as restricted NB model:
n.b., the distribution of the test-statistic under H0 is non-standard

Critical value of test statistic at the alpha= 0.05 level: 2.7055 
Chi-Square Test Statistic =  7.2978 p-value = 0.003452 

The result here indicates that the data are unlikely to come from a Poisson model. For the OP's example 2, all these tests are non-significant. Note that I slightly shortened the output from glm() and glm.nb().
Comparing two vectors from negative binomial distribution in R
First of all, +1 for @caracal's answer. One other possibility would be as follows: fit a negbin distribution to both datasets (e.g., using fitdistr in the MASS package). The parameter estimates are (asymptotically) normally distributed, and we can get the covariance estimates this way:

vcov(fitdistr(w.dots, densfun="negative binomial"))
vcov(fitdistr(c.dots, densfun="negative binomial"))

Unfortunately, I couldn't find a good way to test the null hypothesis "both vectors are generated by the same negbin parameter vector", which should be a multivariate generalization of the standard one-dimensional two-sample t-test with pooled variances. This may help, but I don't have access. Update: I did think of something, see below.

However, what we can do is bootstrap the fits and plot the bootstrapped fits:

library(boot)
library(MASS)

n.boot <- 1000
w.fit.boot <- boot(w.dots, R=n.boot,
                   statistic=function(xx,index) fitdistr(xx[index], densfun="negative binomial")$estimate)
c.fit.boot <- boot(c.dots, R=n.boot,
                   statistic=function(xx,index) fitdistr(xx[index], densfun="negative binomial")$estimate)

plot(c.fit.boot$t, pch=21, bg="black", xlab="size", ylab="mu", log="xy", cex=0.5,
     xlim=range(rbind(w.fit.boot$t, c.fit.boot$t)[,1]),
     ylim=range(rbind(w.fit.boot$t, c.fit.boot$t)[,2]))
abline(v=c.fit.boot$t0[1]); abline(h=c.fit.boot$t0[2])
points(w.fit.boot$t, pch=21, bg="red", col="red", cex=0.5)
abline(v=w.fit.boot$t0[1], col="red"); abline(h=w.fit.boot$t0[2], col="red")
legend(x="bottomright", inset=.01, pch=21, col=c("black","red"), pt.bg=c("black","red"),
       legend=c("c.dots","w.dots"))

Each dot corresponds to one bootstrapped fitted negbin parameter vector. The horizontal and vertical lines give the fitted parameters for the original data.

Results: It looks to me like the data in example 1 pretty obviously do not come from the same distribution, while those in example 2 might well do so. Of course, all this is subject to the negbin distribution being correct. However, one could do a similar exercise with a Poisson model or other ways of accounting for overdispersion.

Update: now, we want to test the amount of overlap between the joint distributions of the negbin parameter estimates based on the two original vectors. Let's start with a simple one-dimensional example. Assume we have estimated means and standard deviations of two normal densities. Under the null hypothesis, these two densities are identical. So one possible test statistic is the amount of overlap under the density curves:

means <- c(1,2)
sds <- c(.2,.3)

xx <- seq(means[1]-3*sds[1], means[2]+3*sds[2], by=.001)
plot(xx, dnorm(xx,means[1],sds[1]), type="l", xlab="", ylab="")
lines(xx, dnorm(xx,means[2],sds[2]))
polygon(x=c(xx,rev(xx)),
        y=c(rep(0,length(xx)),
            rev(pmin(dnorm(xx,means[1],sds[1]), dnorm(xx,means[2],sds[2])))), col="grey")

We then calculate this area using simple numerical integration:

sum(pmin(dnorm(xx,means[1],sds[1]), dnorm(xx,means[2],sds[2])))*mean(diff(xx))
[1] 0.04443207

We can easily generalize this approach to two dimensions. In this case, we have to consider the estimated means and their covariance ellipses and overlay a two-dimensional grid for integration. I'll use the bootstrapped values above to determine the grid dimensions.

library(mvtnorm)

nn <- 100
xx <- seq(min(c(c.fit.boot$t[,1], w.fit.boot$t[,1])),
          max(c(c.fit.boot$t[,1], w.fit.boot$t[,1])), length.out=nn)
yy <- seq(min(c(c.fit.boot$t[,2], w.fit.boot$t[,2])),
          max(c(c.fit.boot$t[,2], w.fit.boot$t[,2])), length.out=nn)

integral <- matrix(NA, nrow=nn, ncol=nn)
for ( ii in 1:nn ) {
  for ( jj in 1:nn ) {
    integral[ii,jj] <- min(dmvnorm(c(xx[ii],yy[jj]), mean=c.fit$estimate, sigma=vcov(c.fit)),
                           dmvnorm(c(xx[ii],yy[jj]), mean=w.fit$estimate, sigma=vcov(w.fit)))
  }
}
sum(integral)*mean(diff(xx))*mean(diff(yy))
[1] 6.673166e-05   # for the data in example 1

Of course, both the one- and the two-dimensional integration can be carried out with pen & paper and/or a computer algebra system. integrate() in R works for the one-dimensional case, and I imagine there are tools in R to integrate two-dimensional functions. In addition, I assume I will get a few downvotes for the double loop above, but I couldn't get a vectorized function (outer() or mapply()) to work; any comments would be welcome. Finally, an equally spaced grid is probably not the most accurate way to do this kind of integration.

A fully vectorized implementation is straightforward; 'integral' doesn't have to be a 2D matrix (note the elementwise pmin(), which replaces the scalar min() of the loop version):

library(mvtnorm)

nn <- 100
xx <- seq(min(c(c.fit.boot$t[,1], w.fit.boot$t[,1])),
          max(c(c.fit.boot$t[,1], w.fit.boot$t[,1])), length.out=nn)
yy <- seq(min(c(c.fit.boot$t[,2], w.fit.boot$t[,2])),
          max(c(c.fit.boot$t[,2], w.fit.boot$t[,2])), length.out=nn)

# vectorized: all grid pairs at once
xy_pair <- cbind(rep(xx, each=nn, times=1), rep(yy, each=1, times=nn))
integral <- pmin(dmvnorm(xy_pair, mean=c.fit$estimate, sigma=vcov(c.fit)),
                 dmvnorm(xy_pair, mean=w.fit$estimate, sigma=vcov(w.fit)))
sum(integral)*mean(diff(xx))*mean(diff(yy))
Comparing two vectors from negative binomial distribution in R
The present answer assumes that both distributions are actually Negative Binomial (NB), and also assumes the two samples to be independent. In this framework, you have a model with four parameters, two for each distribution, and want to test a restriction with two constraints on the parameters. This can be done by standard Maximum Likelihood (ML) theory. The Likelihood Ratio test (LR) is straightforward: the fitdistr function from MASS used above by Stephan Kolassa can provide the maximised likelihoods needed in the LR. Another test is known as the score test or Lagrange Multiplier test (LM); it has nice theoretical properties but needs an information matrix and a score vector from the restricted ML estimation. A third possibility would be a Wald test.

Let $ \boldsymbol{\psi}_{\textrm{c}} := [r_{\textrm{c}},\,\pi_{\textrm{c}}]'$ be the vector of parameters for the "c" sample, with similar notation for the "w". Here $r$ stands for size, while $\pi$ stands for prob in R, i.e., the probability of success. The full parameter vector is $\boldsymbol{\theta} := [\boldsymbol{\psi}_{\textrm{c}}',\, \boldsymbol{\psi}_{\textrm{w}}']'$. You want to test the hypothesis $H_0:\,\boldsymbol{\psi}_{\textrm{c}} = \boldsymbol{\psi}_{\textrm{w}}$. The LM test is based on the approximate distribution $$ \mathbf{U}'\mathbf{I}^{-1}\mathbf{U} \sim \chi^2(d) $$ where the degrees of freedom $d$ equal the number of scalar restrictions, here $2$. The vector $\mathbf{U}$ and the matrix $\mathbf{I}$ are the score vector and the information matrix at the restricted estimate, say $\widehat{\boldsymbol{\theta}}_{0}= [{\widehat{\boldsymbol{\psi}}_{0}}',\, {\widehat{\boldsymbol{\psi}}_{0}}']'$. Here $\mathbf{U}$ is a vector of length $4$ and $\mathbf{I}$ is $4 \times 4$. The LM statistic can here be obtained by noticing that $\mathbf{I}$ is block diagonal with two $2\times 2$ blocks, leading to two contributions arising from the two samples. The score and observed information matrix can be used in closed form.

With the fitdistr function, the parametrisation differs from above and uses $\mu:= r\times (1-\pi)/\pi$. Rather than a parameter change, we can with a little more effort use a concentrated log-likelihood. The two tests here have very small $p$-values, hence one should clearly reject the null hypothesis. Using simulations, it seems that the information matrix $\mathbf{I}$ can in some rare cases be ill-conditioned, and the LM test statistic can then unduly become negative. In these cases at least, the LR test must be preferred to the LM test.

## fit NB distr from a sample X using concentrated logLik
fitNB <- function(X) {
  n <- length(X)
  loglik.conc <- function(r) {
    prob <- n*r / (sum(X) + n*r)
    sum( lgamma(r + X) - lgamma(r) - lgamma(X + 1) +
         r * log(prob) + X * log(1 - prob) )
  }
  ## find 'r' with a 1D optim...
  res <- optimize(f = loglik.conc, interval = c(0.001, 1000), maximum = TRUE)
  r <- res$maximum[1]
  params <- c(size = r, prob = n*r / (sum(X) + n*r))
  attr(params, "logLik") <- res$objective[1]
  params
}

## compute score vector and info matrix at params 'psi' using closed forms
scoreAndInfo <- function(psi, X) {
  size <- psi[1]; prob <- psi[2]
  n <- length(X)
  U <- c(sum(digamma(size + X) - digamma(size) + log(prob)),
         sum(size / prob - X / (1-prob) ))
  I <- matrix(c(- sum(trigamma(size + X) - trigamma(size)), -n / prob,
                -n / prob, sum( size / prob^2 + X / (1-prob)^2)),
              nrow = 2, ncol = 2)
  names(U) <- rownames(I) <- colnames(I) <- c("size", "prob")
  LM <- as.numeric(t(U) %*% solve(I) %*% U)
  list(score = U, info = I, LM = LM)
}

## continuing on the question code; 'a' is for "all" (the pooled sample)
c.fit <- fitNB(X = c.dots)
w.fit <- fitNB(X = w.dots)
a.fit <- fitNB(X = c(c.dots, w.dots))

## LR test and p.value
D.LR <- 2 * ( attr(c.fit, "logLik") + attr(w.fit, "logLik") ) - 2 * attr(a.fit, "logLik")
p.LR <- pchisq(D.LR, df = 2, lower.tail = FALSE)

## use restricted parameter estimate to compute the LM contributions
c.sI <- scoreAndInfo(psi = a.fit, X = c.dots)
w.sI <- scoreAndInfo(psi = a.fit, X = w.dots)
D.LM <- c.sI$LM + w.sI$LM
p.LM <- pchisq(D.LM, df = 2, lower.tail = FALSE)
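For the LR test alone, the maximised log-likelihoods can also be taken straight from MASS::fitdistr, as noted above. A minimal sketch on simulated data (the vectors and parameters below are made up for illustration; the question's c.dots and w.dots would take their place):

```r
library(MASS)

set.seed(1)
c.dots <- rnbinom(200, size = 2, mu = 5)  # illustrative stand-ins for the
w.dots <- rnbinom(200, size = 2, mu = 5)  # question's two count vectors

# maximised log-likelihood of a negbin fit to one sample
ll <- function(x) as.numeric(logLik(fitdistr(x, densfun = "negative binomial")))

# unrestricted model: one negbin per sample; restricted model: one for the pool
D.LR <- 2 * (ll(c.dots) + ll(w.dots) - ll(c(c.dots, w.dots)))
p.LR <- pchisq(D.LR, df = 2, lower.tail = FALSE)
p.LR  # large here, since both samples were drawn with the same parameters
```

This is the same LR statistic as in the concentrated-likelihood code above, just letting fitdistr do the optimisation.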
Comparing two vectors from negative binomial distribution in R
You can use a Bayesian technique. http://www.indiana.edu/~kruschke/BEST/ This gives you the posterior distribution of the sample means for comparison.

x <- c(rep(1,200), rep(2,200))
y <- c(c.dots, w.dots)
require(R2jags)  # open jags console.
dataList <- list(x=x, y=y, Ntotal=length(y))
modelstring = "
model {
  for ( i in 1:Ntotal ) {
    y[i] ~ dnegbin( p[x[i]] , r[x[i]] )
  }
  for ( j in 1:2) {
    p[j] <- r[j]/(r[j]+m[j])
    m[j] ~ dgamma(0.01, 0.01)
    r[j] ~ dgamma(0.01, 0.01)
    v[j] <- r[j]*(1-p[j])/(p[j]*p[j])
  }
}
"
writeLines(modelstring, con="model.txt")
parameters = c("m")
adaptSteps = 1000
burnInSteps = 1000
nChains = 1
numSavedSteps = 10000
thinSteps = 1
nPerChain = ceiling( ( numSavedSteps * thinSteps ) / nChains )
JagsModel = jags.model( "model.txt" , data=dataList , n.chains=nChains , n.adapt=adaptSteps )
codaSamples = coda.samples( JagsModel , variable.names=parameters , n.iter=nPerChain , thin=thinSteps )
m <- as.matrix(codaSamples)
head(m)
plot(density(m[,1]))
lines(density(m[,2]))
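Once the draws matrix m exists, a direct posterior comparison is just the proportion of MCMC draws in which one group mean exceeds the other. A sketch, with simulated draws standing in for as.matrix(codaSamples) since JAGS cannot be rerun here (the means and spread below are made-up values):

```r
set.seed(42)
# placeholder for m <- as.matrix(codaSamples): one column of posterior
# draws per group mean, with an assumed separation for illustration
m <- cbind(rnorm(10000, mean = 5.0, sd = 0.3),
           rnorm(10000, mean = 5.8, sd = 0.3))
diff.post <- m[, 2] - m[, 1]
mean(diff.post > 0)                   # posterior P(mean 2 > mean 1)
quantile(diff.post, c(0.025, 0.975))  # 95% credible interval for the difference
```

A posterior probability near 1 (or a credible interval excluding 0) would indicate the two negbin means differ.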
Temporal trend visualizations for regions
You can make each of the plots easily enough. Sticking with your example, I'll use unemployment data from the European countries between 1999 and 2011 (from Eurostat), called unempd (sorry it's long!): > dput(unempd) structure(list(Year = c(1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 
2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L), Country = structure(c(2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 13L, 13L, 13L, 13L, 13L, 13L, 13L, 13L, 13L, 13L, 13L, 13L, 13L, 
20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L), .Label = c("Austria", "Belgium", "Bulgaria", "Croatia", "Cyprus", "Czech Republic", "Denmark", "Estonia", "Finland", "France", "Germany", "Greece", "Hungary", "Iceland", "Ireland", "Italy", "Latvia", "Lithuania", "Luxembourg", "Malta", "Netherlands", "Norway", "Poland", "Portugal", "Romania", "Slovakia", "Slovenia", "Spain", "Sweden", "Switzerland", "United Kingdom"), class = "factor"), Unemployment = c(8.6, 7, 6.6, 7.5, 8.2, 8.4, 8.4, 8.2, 7.5, 7, 7.9, 8.3, 7.1, 3.6, 4.1, 18.2, 13.7, 12, 10.1, 9, 6.9, 5.6, 6.8, 10.2, 11.2, 8.8, 8.8, 8.2, 7.3, 7.8, 8.3, 7.9, 7.1, 5.3, 4.4, 6.7, 7.3, 6.7, 5.6, 4.6, 4.6, 4.6, 5.4, 5.5, 4.8, 3.9, 3.8, 3.3, 6, 7.4, 7.6, 8.9, 7.9, 7.8, 8.5, 9.8, 10.7, 11.1, 10.2, 8.6, 7.5, 7.7, 7.1, 5.9, 11.6, 13.6, 12.6, 10.3, 10, 9.7, 7.9, 5.9, 4.7, 5.5, 13.8, 16.9, 12.5, 5.8, 4.3, 3.9, 4.4, 4.7, 4.5, 4.3, 4.4, 4.6, 6, 11.7, 13.5, 14.4, 12.1, 11.4, 10.8, 10.3, 9.7, 10.5, 9.8, 8.9, 8.3, 7.7, 9.5, 12.5, 17.7, 15.7, 13.9, 10.5, 11.5, 11.5, 11, 9.2, 8.5, 8.3, 11.3, 18, 20.1, 21.6, 12, 10.2, 9.1, 9.2, 8.9, 9.3, 9.3, 9.3, 8.4, 7.8, 9.5, 9.7, 9.7, 11.4, 
10.6, 9.5, 9, 8.7, 8, 7.7, 6.8, 6.1, 6.7, 7.8, 8.4, 8.4, 5, 4, 3.3, 4.1, 4.3, 5.3, 4.5, 3.9, 3.7, 5.3, 6.2, 7.7, 13.8, 14.2, 13.1, 12.1, 10.5, 10.4, 8.9, 6.8, 6, 7.5, 17.1, 18.7, 15.4, 13.4, 15.9, 16.8, 13.7, 12.4, 11.4, 8.3, 5.6, 4.3, 5.8, 13.7, 17.8, 15.4, 2.4, 2.3, 1.8, 2.6, 3.7, 5.1, 4.5, 4.7, 4.1, 5.1, 5.1, 4.4, 4.9, 7, 6.4, 5.7, 5.8, 5.9, 6.1, 7.2, 7.5, 7.4, 7.8, 10, 11.2, 10.9, 6.3, 7.1, 6.9, 7.6, 7.2, 7.3, 7.3, 6.4, 6, 7, 6.9, 6.5, 3.6, 2.9, 2.3, 2.8, 3.7, 4.6, 4.7, 3.9, 3.2, 2.8, 3.4, 4.5, 4.4, 3.7, 3.5, 3.6, 4, 4.3, 4.9, 5.2, 4.7, 4.4, 3.8, 4.8, 4.4, 4.1, 12.3, 16.1, 18.2, 19.9, 19.6, 19, 17.7, 13.9, 9.6, 7.1, 8.2, 9.6, 9.6, 4.5, 4, 4, 5, 6.3, 6.7, 7.6, 7.7, 8, 7.6, 9.5, 10.8, 12.7, 6.9, 7.2, 6.6, 8.4, 7, 8.1, 7.2, 7.3, 6.4, 5.8, 6.9, 7.3, 7.4, 7.4, 6.7, 6.2, 6.3, 6.7, 6.3, 6.5, 6, 4.8, 4.4, 5.9, 7.2, 8.2, 16.4, 18.8, 19.3, 18.7, 17.6, 18.2, 16.3, 13.4, 11.1, 9.5, 12, 14.4, 13.5, 10.2, 9.8, 9.1, 9.1, 9, 8.8, 8.4, 7.7, 6.9, 6.4, 8.2, 8.4, 7.8, 7.6, 5.4, 4.8, 5.1, 5.7, 6.5, 7.5, 7.1, 6.2, 6.2, 8.4, 8.4, 7.5, 6, 5.6, 5, 5.1, 5, 4.7, 4.8, 5.4, 5.3, 5.6, 7.6, 7.8, 8, 2.2, 1.9, 1.9, 3, 3.3, 3, 2.5, 2.8, 2.3, 2.9, 7.2, 7.6, 7, 3.2, 3.3, 3.5, 3.8, 4, 4.2, 4.4, 3.4, 2.5, 2.5, 3.1, 3.5, 3.2, 3.1, 2.7, 2.5, 2.9, 4.1, 4.3, 4.4, 4, 3.7, 3.3, 4.1, 4.5, 4.1, 15.1, 13.9, 13.7, 12.6, 11.1, 9.6, 8.4, 9.1, 11.8, 13.4)), .Names = c("Year", "Country", "Unemployment"), class = "data.frame", row.names = c(NA, -397L )) You can make the heatmap with: library(ggplot2) hmplot <- ggplot(unempd, aes(Year, Country, fill=Unemployment)) hmplot + geom_tile(colour="white") + scale_fill_gradient(low="light blue", high="dark blue") + ylab("") + xlab("") + opts(legend.position="none") which produces the following plot: Then to make the time series plot, you can use geom_line(stat="identity") [I just averaged the yearly figures from countries using the ddply function from the plyr package which obviously isn't a legitimate reflection of unemployment rate across Europe, but hopefully works for 
the sake of illustration...]. library(plyr) unempxyr <- ddply(unempd, .(Year), summarise, meanunemp = mean(Unemployment)) tsplot <- ggplot(unempxyr, aes(Year, meanunemp)) tsplot + geom_line(stat="identity") + ylab("Level") + xlab("") + scale_y_continuous(lim=c(5,10)) + theme_bw() This results in this graphic: Finally, for the "boxplots", I again used ddply to calculate the boxplot statistics for each country: countryxemp <- ddply(unempd, .(Country), summarise, minemp = fivenum(Unemployment)[1], q2emp = fivenum(Unemployment)[2], medemp = fivenum(Unemployment)[3], q3emp = fivenum(Unemployment)[4], maxemp = fivenum(Unemployment)[5] ) bplot <- ggplot(countryxemp, aes(medemp, Country)) + geom_point() bplot + geom_errorbarh(aes(xmin=minemp, xmax=q2emp), colour=I("black"), height=0) + geom_errorbarh(aes(xmin=q3emp, xmax=maxemp), colour=I("black"), height=0) + ylab("") + xlab("Levels\n (internal)") + theme_bw() which results in this graphic: Is this close enough to what you want? Putting the plots together in the way the article does is another matter. I'm not sure if it's possible via gridExtra::grid.arrange() or something similar to that...?
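On the closing question about assembling the panels: gridExtra's arrangeGrob()/grid.arrange() should indeed handle it. A hedged sketch, with tiny dummy plots standing in for hmplot, tsplot and bplot above (the nesting and the heights proportions are guesses at the article's layout, not its actual specification):

```r
library(ggplot2)
library(gridExtra)

d <- data.frame(x = 1:5, y = 1:5)
p.ts <- ggplot(d, aes(x, y)) + geom_line()   # stand-in for the time series
p.hm <- ggplot(d, aes(x, y)) + geom_tile()   # stand-in for the heatmap
p.bp <- ggplot(d, aes(x, y)) + geom_point()  # stand-in for the dot-range plot

# time series on top, heatmap and dot-range plot side by side below
g <- arrangeGrob(p.ts, arrangeGrob(p.hm, p.bp, ncol = 2),
                 nrow = 2, heights = c(1, 3))
# grid::grid.draw(g), or ggsave("panels.png", g), renders the combined figure
```

arrangeGrob() builds the layout without drawing it, which makes it easy to nest sub-layouts and save the result.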
Temporal trend visualizations for regions
You can make each of the plots easily enough. Sticking with your example, I'll use unemployment data from the European countries between 1999 and 2011 (from Eurostat), called unempd (sorry it's long!)
Temporal trend visualizations for regions You can make each of the plots easily enough. Sticking with your example, I'll use unemployment data from the European countries between 1999 and 2011 (from Eurostat), called unempd (sorry it's long!): > dput(unempd) structure(list(Year = c(1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 
2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 1999L, 2000L, 2001L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L, 2002L, 2003L, 2004L, 2005L, 2006L, 2007L, 2008L, 2009L, 2010L, 2011L), Country = structure(c(2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 15L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 12L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 28L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 10L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 16L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 18L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 19L, 13L, 13L, 13L, 13L, 
13L, 13L, 13L, 13L, 13L, 13L, 13L, 13L, 13L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 20L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 21L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 23L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 24L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 25L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 27L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 26L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 29L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 31L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 22L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 30L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L), .Label = c("Austria", "Belgium", "Bulgaria", "Croatia", "Cyprus", "Czech Republic", "Denmark", "Estonia", "Finland", "France", "Germany", "Greece", "Hungary", "Iceland", "Ireland", "Italy", "Latvia", "Lithuania", "Luxembourg", "Malta", "Netherlands", "Norway", "Poland", "Portugal", "Romania", "Slovakia", "Slovenia", "Spain", "Sweden", "Switzerland", "United Kingdom"), class = "factor"), Unemployment = c(8.6, 7, 6.6, 7.5, 8.2, 8.4, 8.4, 8.2, 7.5, 7, 7.9, 8.3, 7.1, 3.6, 4.1, 18.2, 13.7, 12, 10.1, 9, 6.9, 5.6, 6.8, 10.2, 11.2, 8.8, 8.8, 8.2, 7.3, 7.8, 8.3, 7.9, 7.1, 5.3, 4.4, 6.7, 7.3, 6.7, 5.6, 4.6, 4.6, 4.6, 5.4, 5.5, 4.8, 3.9, 3.8, 3.3, 6, 7.4, 7.6, 8.9, 7.9, 7.8, 8.5, 9.8, 10.7, 11.1, 10.2, 8.6, 7.5, 7.7, 7.1, 5.9, 11.6, 13.6, 12.6, 10.3, 10, 9.7, 7.9, 5.9, 4.7, 5.5, 13.8, 16.9, 12.5, 5.8, 4.3, 3.9, 4.4, 4.7, 4.5, 4.3, 4.4, 4.6, 6, 11.7, 13.5, 14.4, 12.1, 11.4, 10.8, 10.3, 9.7, 10.5, 9.8, 8.9, 8.3, 7.7, 9.5, 12.5, 17.7, 15.7, 13.9, 10.5, 11.5, 11.5, 11, 9.2, 8.5, 8.3, 11.3, 18, 20.1, 21.6, 12, 10.2, 9.1, 9.2, 8.9, 
9.3, 9.3, 9.3, 8.4, 7.8, 9.5, 9.7, 9.7, 11.4, 10.6, 9.5, 9, 8.7, 8, 7.7, 6.8, 6.1, 6.7, 7.8, 8.4, 8.4, 5, 4, 3.3, 4.1, 4.3, 5.3, 4.5, 3.9, 3.7, 5.3, 6.2, 7.7, 13.8, 14.2, 13.1, 12.1, 10.5, 10.4, 8.9, 6.8, 6, 7.5, 17.1, 18.7, 15.4, 13.4, 15.9, 16.8, 13.7, 12.4, 11.4, 8.3, 5.6, 4.3, 5.8, 13.7, 17.8, 15.4, 2.4, 2.3, 1.8, 2.6, 3.7, 5.1, 4.5, 4.7, 4.1, 5.1, 5.1, 4.4, 4.9, 7, 6.4, 5.7, 5.8, 5.9, 6.1, 7.2, 7.5, 7.4, 7.8, 10, 11.2, 10.9, 6.3, 7.1, 6.9, 7.6, 7.2, 7.3, 7.3, 6.4, 6, 7, 6.9, 6.5, 3.6, 2.9, 2.3, 2.8, 3.7, 4.6, 4.7, 3.9, 3.2, 2.8, 3.4, 4.5, 4.4, 3.7, 3.5, 3.6, 4, 4.3, 4.9, 5.2, 4.7, 4.4, 3.8, 4.8, 4.4, 4.1, 12.3, 16.1, 18.2, 19.9, 19.6, 19, 17.7, 13.9, 9.6, 7.1, 8.2, 9.6, 9.6, 4.5, 4, 4, 5, 6.3, 6.7, 7.6, 7.7, 8, 7.6, 9.5, 10.8, 12.7, 6.9, 7.2, 6.6, 8.4, 7, 8.1, 7.2, 7.3, 6.4, 5.8, 6.9, 7.3, 7.4, 7.4, 6.7, 6.2, 6.3, 6.7, 6.3, 6.5, 6, 4.8, 4.4, 5.9, 7.2, 8.2, 16.4, 18.8, 19.3, 18.7, 17.6, 18.2, 16.3, 13.4, 11.1, 9.5, 12, 14.4, 13.5, 10.2, 9.8, 9.1, 9.1, 9, 8.8, 8.4, 7.7, 6.9, 6.4, 8.2, 8.4, 7.8, 7.6, 5.4, 4.8, 5.1, 5.7, 6.5, 7.5, 7.1, 6.2, 6.2, 8.4, 8.4, 7.5, 6, 5.6, 5, 5.1, 5, 4.7, 4.8, 5.4, 5.3, 5.6, 7.6, 7.8, 8, 2.2, 1.9, 1.9, 3, 3.3, 3, 2.5, 2.8, 2.3, 2.9, 7.2, 7.6, 7, 3.2, 3.3, 3.5, 3.8, 4, 4.2, 4.4, 3.4, 2.5, 2.5, 3.1, 3.5, 3.2, 3.1, 2.7, 2.5, 2.9, 4.1, 4.3, 4.4, 4, 3.7, 3.3, 4.1, 4.5, 4.1, 15.1, 13.9, 13.7, 12.6, 11.1, 9.6, 8.4, 9.1, 11.8, 13.4)), .Names = c("Year", "Country", "Unemployment"), class = "data.frame", row.names = c(NA, -397L )) You can make the heatmap with: library(ggplot2) hmplot <- ggplot(unempd, aes(Year, Country, fill=Unemployment)) hmplot + geom_tile(colour="white") + scale_fill_gradient(low="light blue", high="dark blue") + ylab("") + xlab("") + opts(legend.position="none") which produces the following plot: Then to make the time series plot, you can use geom_line(stat="identity") [I just averaged the yearly figures from countries using the ddply function from the plyr package which obviously isn't a legitimate reflection of 
unemployment rate across Europe, but hopefully works for the sake of illustration...].

library(plyr)
unempxyr <- ddply(unempd, .(Year), summarise, meanunemp = mean(Unemployment))
tsplot <- ggplot(unempxyr, aes(Year, meanunemp))
tsplot + geom_line(stat="identity") + ylab("Level") + xlab("") +
  scale_y_continuous(lim=c(5,10)) + theme_bw()

This results in this graphic:

Finally, for the "boxplots", I again used ddply to calculate the boxplot statistics for each country:

countryxemp <- ddply(unempd, .(Country), summarise,
                     minemp = fivenum(Unemployment)[1],
                     q2emp  = fivenum(Unemployment)[2],
                     medemp = fivenum(Unemployment)[3],
                     q3emp  = fivenum(Unemployment)[4],
                     maxemp = fivenum(Unemployment)[5])
bplot <- ggplot(countryxemp, aes(medemp, Country)) + geom_point()
bplot + geom_errorbarh(aes(xmin=minemp, xmax=q2emp), colour=I("black"), height=0) +
  geom_errorbarh(aes(xmin=q3emp, xmax=maxemp), colour=I("black"), height=0) +
  ylab("") + xlab("Levels\n (internal)") + theme_bw()

which results in this graphic:

Is this close enough to what you want? Putting the plots together in the way the article does is another matter. I'm not sure if it's possible via gridExtra::grid.arrange() or something similar to that...?
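The fivenum() calls above compute Tukey's five-number summary (minimum, lower hinge, median, upper hinge, maximum) for each country. For readers without R at hand, here is a small pure-Python sketch of that statistic; the function name and layout are my own, and the hinge rule follows R's documented definition:

```python
def fivenum(data):
    """Tukey's five-number summary (min, lower hinge, median, upper hinge, max),
    following R's fivenum() definition of the hinges."""
    x = sorted(data)
    n = len(x)
    d = ((n + 3) // 2) / 2.0  # hinge depth, as in R's fivenum

    def at(pos):
        # 1-indexed lookup; half-integer positions average two neighbours
        i = int(pos) - 1
        return x[i] if pos == int(pos) else (x[i] + x[i + 1]) / 2.0

    return [x[0], at(d), at((n + 1) / 2.0), at(n + 1 - d), x[-1]]

print(fivenum([1, 2, 3, 4]))     # [1, 1.5, 2.5, 3.5, 4]
print(fivenum([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```

Feeding each country's series through such a function gives exactly the five columns built by the ddply call.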
38,550
Temporal trend visualizations for regions
I know your answer was marked for R, but if you're open to an Excel solution, it's relatively easy to work up the same sort of graphic: From my perspective, Excel has a couple of advantages over R. First, more people have access to it than can use R (I think I'm a prime example of that), so you have more audience reach with the software. Also, in Excel this is an interactive chart, so the top row (Netherlands in this example) can be used as a selector for any of the series and, when changed, repopulates all the related areas in that row, plus the target series in the lower charts.
38,551
How can I show that for any $a > 0$, $\lim_{n\to \infty}P \left(\sum_{i=1}^n X_i^2\leq a\right)=0$
Define $Y_n = \sum_{i=1}^n X_i^2$. As you mentioned, $Y_n$ is distributed according to chi-squared with density: $$p(y) = C(n)y^{n/2-1}e^{-y/2}$$ where $C(n) = \frac{1}{2^{n/2}\Gamma(n/2)}$ is the normalization constant (the density is defined only for $y \ge 0$). The probability of $Y_n$ being smaller than $a$ is (as a function of $n$): $$\int_0^a C(n) y^{n/2-1} e^{-y/2}\,dy.$$ This is strictly smaller than $$\int_0^a C(n) y^{n/2-1}\,dy = C(n) \frac{2}{n} a^{n/2}$$ because $e^{-z} < 1$ for $z > 0$. This means that it suffices to show that $$D(n) = \frac{2}{n\, 2^{n/2} \Gamma(n/2)} a^{n/2} \le \frac{d^n}{\Gamma(n/2)}$$ goes to 0 as $n$ goes to infinity, where $d = \sqrt{a/2}$ (the inequality holds for $n \ge 2$, since then $2/n \le 1$). Now, for $n \ge 4$, $\Gamma(n/2)$ is at least as large as $\Gamma(m/2)$, where $m$ is $n$ if $n$ is even or $n-1$ if $n$ is odd (i.e. $m$ is always an even number no larger than $n$). Therefore, $\frac{d^n}{\Gamma(n/2)} \le \frac{d^n}{\Gamma(m/2)} = \frac{d^n}{(m/2-1)!}$. So we get $$D(n) \le \frac{d^n}{(m/2-1)!}$$ which clearly goes to 0 at the speed of light.
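As a numerical sanity check (not part of the original argument): for even $n = 2m$ the chi-squared CDF can be evaluated exactly through the classical identity $P(\chi^2_{2m} \le a) = P(\mathrm{Poisson}(a/2) \ge m)$. A small Python sketch, summing the Poisson upper tail directly to avoid cancellation:

```python
import math

def chi2_cdf_even(a, n, terms=200):
    """P(chi^2_n <= a) for even n = 2m, via the identity
    P(chi^2_{2m} <= a) = P(Poisson(a/2) >= m)."""
    assert n % 2 == 0 and n > 0
    m = n // 2
    lam = a / 2.0
    # first term of the Poisson upper tail, computed in log space for stability
    term = math.exp(-lam + m * math.log(lam) - math.lgamma(m + 1))
    total = term
    for k in range(m + 1, m + terms):
        term *= lam / k
        total += term
    return total

for n in (10, 50, 100):
    print(n, chi2_cdf_even(10.0, n))  # shrinks rapidly toward 0 as n grows
```

For fixed $a = 10$ the probability collapses from about $0.56$ at $n = 10$ to below $10^{-20}$ at $n = 100$, matching the bound above.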
38,552
How can I show that for any $a > 0$, $\lim_{n\to \infty}P \left(\sum_{i=1}^n X_i^2\leq a\right)=0$
The stated result has nothing to do with the normal distribution and is fully general. It can also be proven using only the most basic properties of cumulative distribution functions. In particular, it is unnecessary to appeal to (relatively) "high-powered" theorems like the Law of Large Numbers. Proposition: Let $X_i \sim F$ be iid with any distribution other than $\delta_0$, a point-mass at zero. Then, for each $a \geq 0$, $$ \lim_{n\to\infty} \mathbb P\left(\sum_{i=1}^n X_i^2 \leq a\right) = 0 \>. $$ Define $Y_i = X_i^2$ and denote the distribution of $Y_i$ by $G$. If $F$ is of unbounded support, then so is $G$. In this case, everything is completely straightforward once we have the following lemma. Lemma: If $Y_i \geq 0$ are iid, then $\mathbb P(\sum_{i=1}^n Y_i \leq a) \leq G^n(a)$. Proof: $\{Y_1 + Y_2 + \dots + Y_n \leq a\} \subset \{Y_1 \leq a, Y_2 \leq a, \ldots, Y_n \leq a\}$. Hence, $$ \mathbb P\left( \sum_{i=1}^n Y_i \leq a\right) \leq G^n(a). $$ In words, if the sum of $n$ nonnegative terms is less than $a$, then all of the terms must be less than $a$. The iid property of the $Y_i$ variables is then invoked to obtain the result. Now, if $G$ is of unbounded support, then $G(a) < 1$ for all $a \geq 0$, but then $G^n(a) \to 0$, so invoking the lemma, we are done. Extending to the case of bounded support is not much more difficult. Suppose there exists $B > 0$ such that $G(a) = 1$ for all $a \geq B$ and $G(a) < 1$ for $a < B$. The case where $a < B$ is already handled by the argument above. For $a > B$, there is a fixed $N := N(a) = [1+(a/B)]$ such that $$ G_N(a) := \mathbb P\left(\sum_{i=1}^N Y_i \leq a\right) < 1 \>. $$ (Why?) But, then this reduces to the previous case since $(G_N(a))^m \to 0$ as $m \to \infty$ by considering sums over blocks of size $N$. NB The intuition in the bounded case is that once we add enough terms, the support of the distribution of the sum will eventually catch up to, and overtake, $a$. 
Once that happens, we find ourselves in the previous case. Epilogue The specific case of the normal distribution falls under the category of $F$ (hence, $G$) with unbounded support. So, we only need the first part of the answer (which requires no calculation whatsoever) to establish the result in the question statement.
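A quick Monte Carlo illustration of the lemma, specializing to the standard normal case from the question (the seed, sample size, and choice $n = 3$, $a = 1$ are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
n, a, trials = 3, 1.0, 100_000

# empirical P(X_1^2 + ... + X_n^2 <= a) for standard normal X_i
hits = sum(
    sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)) <= a
    for _ in range(trials)
)
p_emp = hits / trials

# G(a) = P(X^2 <= a) = P(|X| <= sqrt(a)) for a single standard normal
g_a = math.erf(math.sqrt(a / 2.0))

print(p_emp, g_a ** n)  # the lemma says the first is at most the second
```

Here $G(1) \approx 0.683$, so $G^3(1) \approx 0.318$, while the empirical probability lands near $0.199$, comfortably below the bound.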
38,553
How can I show that for any $a > 0$, $\lim_{n\to \infty}P \left(\sum_{i=1}^n X_i^2\leq a\right)=0$
We note that since each $X_i$ has mean 0 and variance 1, $(\sum_i X_i^2)/n$ converges to 1 a.s. by the strong law of large numbers. But if $$ \lim_{n\to\infty} \mathbb P\Big( \sum_{i=1}^n X_i^2 < a \Big) > 0 \,, $$ then, because the partial sums are nondecreasing in $n$, the sum stays below $a$ with positive probability; on that event $$ \frac{1}{n}\sum_{i=1}^n X_i^2 $$ converges to 0, which contradicts the almost sure convergence to 1.
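The strong-law step is easy to see numerically; a Python sketch (seed and sample sizes are arbitrary):

```python
import random

def mean_square(n, seed=42):
    """(1/n) * sum of n squared standard normal draws;
    approaches E[X^2] = 1 as n grows, per the strong law."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n)) / n

for n in (100, 10_000, 100_000):
    print(n, mean_square(n))  # values settle near 1
```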
38,554
How to conduct a meta-analysis using raw data?
As @Alexander already mentioned, you are looking for an approach that is called "individual participant/person/patient data meta-analysis" (IPD meta-analysis). He also referred to an article by Richard Riley, who has published a lot in this field. Please find below a collection of articles that I used for our advanced meta-analysis class:
Cooper, H., & Patall, E. A. (2009). The relative benefits of meta-analysis conducted with individual participant data versus aggregated data. Psychological Methods, 14(2), 165–176. doi:10.1037/a0015565
Curran, P. J., & Hussong, A. M. (2009). Integrative data analysis: the simultaneous analysis of multiple data sets. Psychological Methods, 14(2), 81–100. doi:10.1037/a0015914
Lyman, G. H., & Kuderer, N. M. (2005). The strengths and limitations of meta-analyses based on aggregate data. BMC Medical Research Methodology, 5(1), 14. (already mentioned)
Riley, R. D., Lambert, P. C., & Abo-Zaid, G. (2010). Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ, 340, c221. doi:10.1136/bmj.c221
Riley, R. D., Lambert, P. C., Staessen, J. A., Wang, J., Gueyffier, F., Thijs, L., & Boutitie, F. (2007). Meta-analysis of continuous outcomes combining individual patient data and aggregate data. Statistics in Medicine. doi:10.1002/sim.3165
Simmonds, M. C., Higgins, J. P., Stewart, L. A., Tierney, J. F., Clarke, M. J., & Thompson, S. G. (2005). Meta-analysis of individual patient data from randomized trials: a review of methods used in practice. Clinical Trials, 2(3), 209–217.
Stewart, L. A., & Tierney, J. F. (2002). To IPD or not to IPD? Advantages and disadvantages of systematic reviews using individual patient data. Evaluation and the Health Professions, 25(1), 76–97.
38,555
How to conduct a meta-analysis using raw data?
Yes. It can be a problem, especially in scenarios where pharmaceutical clinical trials are involved and, due to competition, companies may not want to share their data. Nevertheless, such studies have been carried out in recent years in such settings. Combining raw data when the studies involve the same endpoints and the populations are similar is probably better than using summary data. But I do think there are situations where summary statistics, or even just sample size information along with p-values, can be used. This very much depends on the studies and the goals of the meta-analysis. In my experience, the hardest part of meta-analysis studies is deciding which studies should be included and which must be left out.
38,556
How to display multiple density or distribution functions on a single plot?
The question asks for "easiest." Interpreting that in terms of either (i) lines of code, (ii) naturality of expression, or (iii) raw capabilities, I find the Mathematica solutions to be well worth considering. For example, Plot[Evaluate[ PDF[WeibullDistribution[#, 20]][x] & /@ {1/2, 1/3, 1/4, 1/5}], {x, 0, 1}, AxesOrigin -> {0, 0}] produces the example in the question and gMixture[x_, weights_, shapes_, scales_] := MapThread[PDF[GammaDistribution[##]][x] &, {shapes, scales}] . weights / Total[weights]; Plot[gMixture[x, {1, 2, 3}, {2, 3, 10}, {1, 1, 1}], {x, 0, 20}, AxesOrigin -> {0, 0}] shows what it takes to define and plot a new distribution (here, a mixture of gammas): Need something more exotic? It's likely already part of Mathematica. E.g., here is a PDF obtained from a Jacobi theta function by normalizing its area to unity: With[{c = NIntegrate[EllipticTheta[1, z, 1/2], {z, 0, Pi}]}, Plot[EllipticTheta[1, z, 1/2] / c, {z, 0, Pi}, Filling -> Axis]]
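For readers without Mathematica, the gamma mixture is easy to reproduce by hand. Here is a hedged Python sketch (function names are mine) of the same density, including a check that the weighted mixture still integrates to 1:

```python
import math

def gamma_pdf(x, shape, scale=1.0):
    """Density of the Gamma(shape, scale) distribution at x > 0."""
    return (x ** (shape - 1) * math.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def gamma_mixture_pdf(x, weights, shapes, scales):
    """Weighted mixture of gamma densities (weights normalized to sum to 1),
    mirroring the gMixture definition above."""
    total = float(sum(weights))
    return sum(w / total * gamma_pdf(x, k, s)
               for w, k, s in zip(weights, shapes, scales))

# same mixture as in the text: weights 1,2,3, shapes 2,3,10, unit scales
w, k, s = (1, 2, 3), (2, 3, 10), (1, 1, 1)

# midpoint-rule integral over (0, 40): a density should integrate to ~1
h = 0.01
area = sum(gamma_mixture_pdf((i + 0.5) * h, w, k, s) for i in range(4000)) * h
print(area)
```

Because the mixture is a convex combination of densities, the integral comes out very close to 1, which is a convenient way to catch a mis-specified weight or shape before plotting.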
38,557
How to display multiple density or distribution functions on a single plot?
I love R, easy and free. Here's an example:

# The par removes the "padding" from the axis
par(xaxs="i", yaxs="i")
# Initiate the x; a small "by" is neat for a smooth curve
# Can't use 0 since it produces an integrate() error
x <- seq(0.0001, 3, by=.01)
# Just some vanity - adding a little color :-), heat.colors(5) could be an option
colors <- c("darkred", "red", "orange", "gold", "yellow")
plot(x, type="n", ylim=c(0,3), ylab="Density")
for(i in 1:5){
  lines(x, dweibull(x, shape=1/i), col=colors[i])
}
title("Weibull tests")

Gives this:

Update

I've played around with Peter Flom's suggestion with the integrate function. The prob. function, same as above:

plot(x, type="n", ylim=c(0,1), xlim=range(x), ylab="Prob")
for(i in 1:5){
  lines(x, pweibull(x, shape=1/i), col=colors[i])
}
title("Using the pweibull function")

Gives this graph:

When using the integrate function to get the "same" graph the code looks like this:

plot(x, type="n", ylim=c(0,1), xlim=c(0, max(x)), ylab="Density")
for(i in 1:5){
  t <- apply(matrix(x), MARGIN=1, FUN=function(x)
    integrate(function(a) dweibull(a, shape=1/i), 0, x)$value)
  lines(x, t, col=colors[i])
}
title("Using the integrate function")

And this gives virtually an identical graph:
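The point the answer demonstrates graphically (integrating dweibull recovers pweibull) can also be checked numerically outside R; a small Python sketch using the same parameterization as R's Weibull functions:

```python
import math

def dweibull(x, shape, scale=1.0):
    """Weibull density, parameterized as in R's dweibull (shape k, scale)."""
    z = x / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-(z ** shape))

def pweibull(x, shape, scale=1.0):
    """Weibull CDF, parameterized as in R's pweibull."""
    return 1.0 - math.exp(-((x / scale) ** shape))

# midpoint-rule integral of the density from 0 to x_end, echoing the
# integrate() check above: it should land on the CDF value
shape, x_end, h = 2.0, 1.5, 1e-3
steps = int(round(x_end / h))
area = sum(dweibull((i + 0.5) * h, shape) for i in range(steps)) * h
print(area, pweibull(x_end, shape))  # the two agree closely
```

(Shape 2 is used here because shapes below 1, as in the plots, make the density unbounded at 0 and a naive midpoint rule converges slowly there; R's integrate() handles that singularity adaptively.)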
38,558
How to display multiple density or distribution functions on a single plot?
I like R too. Here is a more or less generic function to plot any probability distribution from the base R functions. It should not be difficult to extend the code with functions available in other packages, e.g. SuppDists.

plot.func <- function(distr=c("beta", "binom", "cauchy", "chisq", "exp", "f",
                              "gamma", "geom", "hyper", "logis", "lnorm",
                              "nbinom", "norm", "pois", "t", "unif", "weibull"),
                      what=c("pdf", "cdf"), params=list(), type="b",
                      xlim=c(0, 1), log=FALSE, n=101, add=FALSE, ...) {
  what <- match.arg(what)
  d <- match.fun(paste(switch(what, pdf = "d", cdf = "p"), distr, sep=""))
  # Define x-values (because we won't use 'curve') as last parameter
  # (with pdf, it should be 'x', while for cdf it is 'q').
  len <- length(params)
  params[[len+1]] <- seq(xlim[1], xlim[2], length=n)
  if (add)
    lines(params[[len+1]], do.call(d, params), type, ...)
  else
    plot(params[[len+1]], do.call(d, params), type, ...)
}

It's a bit crappy and I haven't tested it a lot. The params list must obey R's conventions for naming {C|P}DF parameters (e.g., shape and scale for the Weibull distribution, and not a or b). There's room for improvement, especially about the way it handles multiple plotting on the same graphic device (and, actually, passing a vector of parameters only works as a side-effect when type="p"). Also, there's not much parameter checking!
Here are some examples of use:

# Gaussian PDFs
xl <- c(-5, 5)
plot.func("norm", what="pdf", params=list(mean=1, sd=1.2), xlim=xl, ylim=c(0,.5),
          cex=.8, type="l", xlab="x", ylab="F(x)")
plot.func("norm", what="pdf", params=list(mean=3, sd=.8), xlim=xl, add=TRUE,
          pch=19, cex=.8)
plot.func("norm", what="pdf", params=list(mean=.5, sd=1.3), n=201, xlim=xl,
          add=TRUE, pch=19, cex=.4, type="p", col="steelblue")
title(main="Some gaussian PDFs")

# Gaussian CDFs
plot.func("norm", "cdf", xlab="Quantile (x)", ylab="P(X<x)", xlim=c(-3,3),
          type="l", main="Some gaussian CDFs")
plot.func("norm", "cdf", list(sd=c(0.5,1.5)), xlim=c(-3,3), add=TRUE, type="p",
          pch=c("o","+"), n=201, cex=.8)
legend("topleft", paste("N(0;", c(1,0.5,1.5), ")", sep=""), lty=c(1,NA,NA),
       pch=c(NA,"o","+"), bty="n")

# Weibull distribution
s <- c(.5,.75,1)
plot.func("weibull", what="pdf", xlim=c(0,1), params=list(shape=s), col=1:3,
          type="p", n=301, pch=19, cex=.6, xlab="", ylab="")
title(main="Weibull distribution", xlab="x", ylab="F(x)")
legend("topright", legend=as.character(s), title="Shape", col=1:3, pch=19)
38,559
When should dimensional-reduction be used?
Rather than asking "when to use" let's look at "why to use" - I believe this nicely leads us to the "when" answer. My understanding is that dimensionality reduction is mainly done to speed up learning (many features lead to longer computations) and compress data (many features take a lot of disk/memory space). In this view, you should reduce dimensions only if running time or data size is "unacceptable", and you reduce the feature space until things become "acceptable". "Unacceptable" is, obviously, defined solely by the task at hand. Modern computers can handle a lot of computations and store a lot of data - which is why, I think, you were told that 500 features is not too much. There are a few other reasons for dimensionality reduction I can think of: matrix inversion problems - an algorithm can build a matrix from the sample set, and if some features are interdependent this makes the matrix non-invertible. But in practice it's not a big deal and gets circumvented via the Moore-Penrose pseudoinverse so, in my view, this one should not be the reason for dimensionality reduction. data visualization - the rule of thumb here is to extract features until you're left with a maximum of two, due to a deficiency in human cognition :)
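To make the speed/compression motivation concrete, here is a small NumPy sketch (Python rather than R, with made-up toy data): 50 correlated features are compressed to the handful of principal components that carry nearly all the variance.

```python
import numpy as np

# Toy data: 200 samples, 50 features, but only 3 independent latent directions
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))            # the "real" low-dimensional signal
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep the smallest k components explaining 99% of the variance
explained = (s ** 2) / (s ** 2).sum()
k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
X_reduced = Xc @ Vt[:k].T                     # 200 x k instead of 200 x 50
```

Any downstream learner now sees k columns instead of 50, which is where the run-time and storage savings come from.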
When should dimensional-reduction be used?
Rather than asking "when use" let's look at "why use" - I believe this nicely leads us to the "when" answer. My understanding is that dimensionality reduction is mainly done to speed up learning (man
When should dimensional-reduction be used? Rather than asking "when to use" let's look at "why to use" - I believe this nicely leads us to the "when" answer. My understanding is that dimensionality reduction is mainly done to speed up learning (many features lead to longer computations) and compress data (many features take a lot of disk/memory space). In this view, you should reduce dimensions only if running time or data size is "unacceptable", and you reduce the feature space until things become "acceptable". "Unacceptable" is, obviously, defined solely by the task at hand. Modern computers can handle a lot of computations and store a lot of data - which is why, I think, you were told that 500 features is not too much. There are a few other reasons for dimensionality reduction I can think of: matrix inversion problems - an algorithm can build a matrix from the sample set, and if some features are interdependent this makes the matrix non-invertible. But in practice it's not a big deal and gets circumvented via the Moore-Penrose pseudoinverse so, in my view, this one should not be the reason for dimensionality reduction. data visualization - the rule of thumb here is to extract features until you're left with a maximum of two, due to a deficiency in human cognition :)
When should dimensional-reduction be used? Rather than asking "when use" let's look at "why use" - I believe this nicely leads us to the "when" answer. My understanding is that dimensionality reduction is mainly done to speed up learning (man
38,560
When should dimensional-reduction be used?
As far as I know, we don't have a rule of thumb regarding when to use dimensional reduction. I also think it depends upon the ratio between the number of subjects and features. Other factors, such as the processing power of the system on which you are going to deploy your learning algorithm, might also have to be considered. Further, dimensional reduction techniques such as sparse auto-encoders are capable of finding interesting patterns in the data and hence can improve the accuracy of algorithms. Therefore one might think that it is always better to use a dimensional reduction method.
When should dimensional-reduction be used?
As far as I know, we don't have a rule of thumb regarding when to use dimensional reduction. I'm also thinking that, is depends upon the ratio between the number of subjects and features. Also other f
When should dimensional-reduction be used? As far as I know, we don't have a rule of thumb regarding when to use dimensional reduction. I also think it depends upon the ratio between the number of subjects and features. Other factors, such as the processing power of the system on which you are going to deploy your learning algorithm, might also have to be considered. Further, dimensional reduction techniques such as sparse auto-encoders are capable of finding interesting patterns in the data and hence can improve the accuracy of algorithms. Therefore one might think that it is always better to use a dimensional reduction method.
When should dimensional-reduction be used? As far as I know, we don't have a rule of thumb regarding when to use dimensional reduction. I'm also thinking that, is depends upon the ratio between the number of subjects and features. Also other f
38,561
When should dimensional-reduction be used?
The number of features is not the only reason for reduction. It is also important to check what these features are. Although this is a computer science oriented site, the issues of memory and run time are relevant, but they shouldn't be the only focus of many of the learning tasks. When you are selecting your features, you should have some kind of hypothesis regarding what is relevant for the task at hand. If you selected your features in a random way, or in a way that is not related to the task you wish to learn, it is OK to continue using "random" methods to reduce this number. But if you had some hypothesis about the features, I would try to keep as many of them as possible in the learning process. In general, the better understanding you have and the better planning of your task regarding which features are best to learn with, the better your results will be.
When should dimensional-reduction be used?
The number of features is not the only reason for reduction. It is also important to check what are these features. Although this is a computer science oriented site, the issues of memory and run tim
When should dimensional-reduction be used? The number of features is not the only reason for reduction. It is also important to check what these features are. Although this is a computer science oriented site, the issues of memory and run time are relevant, but they shouldn't be the only focus of many of the learning tasks. When you are selecting your features, you should have some kind of hypothesis regarding what is relevant for the task at hand. If you selected your features in a random way, or in a way that is not related to the task you wish to learn, it is OK to continue using "random" methods to reduce this number. But if you had some hypothesis about the features, I would try to keep as many of them as possible in the learning process. In general, the better understanding you have and the better planning of your task regarding which features are best to learn with, the better your results will be.
When should dimensional-reduction be used? The number of features is not the only reason for reduction. It is also important to check what are these features. Although this is a computer science oriented site, the issues of memory and run tim
38,562
When should dimensional-reduction be used?
If the complexity of your model or classifier trained on those n features scales badly (e.g. the number of parameters grows as O(n^3)), then even 500 features can be a problem. Not only because the optimization takes longer, but also because you might not have enough data to constrain your parameters, which would lead to overfitting. By reducing model complexity, dimensionality reduction can therefore also act as a means of regularization.
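As a back-of-the-envelope illustration of that scaling (my own toy example, not from the answer above): a model with an intercept, all linear terms, and all quadratic/interaction terms already has O(n^2) parameters, and fitting it may require inverting an n x n matrix at roughly O(n^3) cost.

```python
def quadratic_param_count(n: int) -> int:
    """Intercept + n linear terms + n*(n+1)/2 quadratic/interaction terms."""
    return 1 + n + n * (n + 1) // 2

# 500 features -> over 125,000 parameters to constrain with your data
counts = {n: quadratic_param_count(n) for n in (10, 100, 500)}
```

With 500 features, that is over 125,000 parameters: if the training set is much smaller than that, overfitting is the expected outcome, which is exactly the regularization argument for reducing dimensions first.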
When should dimensional-reduction be used?
If the complexity of your model or classifier trained on those n features scales badly (e.g. the number of parameters grows as O(n^3)), then even 500 features can be a problem. Not only because the op
When should dimensional-reduction be used? If the complexity of your model or classifier trained on those n features scales badly (e.g. the number of parameters grows as O(n^3)), then even 500 features can be a problem. Not only because the optimization takes longer, but also because you might not have enough data to constrain your parameters, which would lead to overfitting. By reducing model complexity, dimensionality reduction can therefore also act as a means of regularization.
When should dimensional-reduction be used? If the complexity of your model or classifier trained on those n features scales badly (e.g. the number of parameters grows as O(n^3)), then even 500 features can be a problem. Not only because the op
38,563
When should dimensional-reduction be used?
I saw another very interesting use case for dimensionality reduction in a video from Stanford a while ago. They scanned a bunch of people with a body scanner, and used that to generate 3D models. After they had a bunch of data they applied dimensionality reduction to reduce the number of variables they had to work with. And modifying those variables allowed them to quickly change the height/weight/gender of the resulting 3D models.
When should dimensional-reduction be used?
I saw another very interesting usage case for dimensionality reduction in a video from stanford a while ago. They scanned a bunch of people with a body scanner, and used that to generate 3d models. Af
When should dimensional-reduction be used? I saw another very interesting use case for dimensionality reduction in a video from Stanford a while ago. They scanned a bunch of people with a body scanner, and used that to generate 3D models. After they had a bunch of data they applied dimensionality reduction to reduce the number of variables they had to work with. And modifying those variables allowed them to quickly change the height/weight/gender of the resulting 3D models.
When should dimensional-reduction be used? I saw another very interesting usage case for dimensionality reduction in a video from stanford a while ago. They scanned a bunch of people with a body scanner, and used that to generate 3d models. Af
38,564
How to test a logistic regression model developed on a training sample on the data left out using R?
You can use predict() for that. You need the model fitted to the training data, and the data from your test group. With type="response", you'll get the predicted probabilities; the default is the predicted logits. # generate some data for a logistic regression, all observations x <- rnorm(100, 175, 7) # predictor variable y <- 0.4*x + 10 + rnorm(100, 0, 3) # continuous predicted variable yFac <- cut(y, breaks=c(-Inf, median(y), Inf), labels=c("lo", "hi")) # median split d <- data.frame(yFac, x) # data frame # now set aside training sample and corresponding test sample idxTrn <- 1:70 # training sample idxTst <- !(1:nrow(d) %in% idxTrn) # test sample -> all remaining obs # if idxTrn were a logical index vector, this would just be idxTst <- !idxTrn # fit logistic regression only to training sample fitTrn <- glm(yFac ~ x, family=binomial(link="logit"), data=d, subset=idxTrn) # apply fitted model to test sample (predicted probabilities) predTst <- predict(fitTrn, d[idxTst, ], type="response") Now you may compare the predicted probabilities against actual class values however you like. You may set a threshold for categorizing the predicted probabilities, and compare actual against predicted category memberships. > thresh <- 0.5 # threshold for categorizing predicted probabilities > predFac <- cut(predTst, breaks=c(-Inf, thresh, Inf), labels=c("lo", "hi")) > cTab <- table(yFac[idxTst], predFac, dnn=c("actual", "predicted")) > addmargins(cTab) predicted actual lo hi Sum lo 12 4 16 hi 5 9 14 Sum 17 13 30 Note that the dataframe supplied to predict() needs to have the same variable names as the df used in the call to glm(), and the factors need to have the same levels in the same order. If you're interested in k-fold cross validation, have a look at the cv.glm() function from package boot.
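For comparison, here is a rough Python/NumPy translation of the same train/test split idea (toy data mirroring the R example; the fit is done by plain gradient ascent on the log-likelihood rather than glm(), and the step size and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(175, 7, size=100)                      # predictor
y_cont = 0.4 * x + 10 + rng.normal(0, 3, size=100)    # continuous outcome
y = (y_cont > np.median(y_cont)).astype(float)        # median split -> 0/1

X = np.column_stack([np.ones(100), x])
train, test = slice(0, 70), slice(70, 100)
Xtr, ytr = X[train], y[train]

# Standardize the predictor (training statistics only) for stable steps
mu, sd = Xtr[:, 1].mean(), Xtr[:, 1].std()
Xs = Xtr.copy(); Xs[:, 1] = (Xs[:, 1] - mu) / sd

w = np.zeros(2)
for _ in range(5000):                                 # gradient ascent
    p = 1 / (1 + np.exp(-Xs @ w))
    w += 0.1 * Xs.T @ (ytr - p) / len(ytr)

# Predicted probabilities on the 30 held-out observations
Xte = X[test].copy(); Xte[:, 1] = (Xte[:, 1] - mu) / sd
p_test = 1 / (1 + np.exp(-Xte @ w))
pred = (p_test > 0.5).astype(float)                   # 0.5 threshold, as above
accuracy = (pred == y[test]).mean()
```

Cross-tabulating pred against y[test] then gives the same kind of confusion table as addmargins(cTab) does in R.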
How to test a logistic regression model developed on a training sample on the data left out using R
You can use predict() for that. You need the model fitted to the training data, and the data from your test group. With type="response", you'll get the predicted probabilities, the default is the pred
How to test a logistic regression model developed on a training sample on the data left out using R? You can use predict() for that. You need the model fitted to the training data, and the data from your test group. With type="response", you'll get the predicted probabilities; the default is the predicted logits. # generate some data for a logistic regression, all observations x <- rnorm(100, 175, 7) # predictor variable y <- 0.4*x + 10 + rnorm(100, 0, 3) # continuous predicted variable yFac <- cut(y, breaks=c(-Inf, median(y), Inf), labels=c("lo", "hi")) # median split d <- data.frame(yFac, x) # data frame # now set aside training sample and corresponding test sample idxTrn <- 1:70 # training sample idxTst <- !(1:nrow(d) %in% idxTrn) # test sample -> all remaining obs # if idxTrn were a logical index vector, this would just be idxTst <- !idxTrn # fit logistic regression only to training sample fitTrn <- glm(yFac ~ x, family=binomial(link="logit"), data=d, subset=idxTrn) # apply fitted model to test sample (predicted probabilities) predTst <- predict(fitTrn, d[idxTst, ], type="response") Now you may compare the predicted probabilities against actual class values however you like. You may set a threshold for categorizing the predicted probabilities, and compare actual against predicted category memberships. > thresh <- 0.5 # threshold for categorizing predicted probabilities > predFac <- cut(predTst, breaks=c(-Inf, thresh, Inf), labels=c("lo", "hi")) > cTab <- table(yFac[idxTst], predFac, dnn=c("actual", "predicted")) > addmargins(cTab) predicted actual lo hi Sum lo 12 4 16 hi 5 9 14 Sum 17 13 30 Note that the dataframe supplied to predict() needs to have the same variable names as the df used in the call to glm(), and the factors need to have the same levels in the same order. If you're interested in k-fold cross validation, have a look at the cv.glm() function from package boot.
How to test a logistic regression model developed on a training sample on the data left out using R You can use predict() for that. You need the model fitted to the training data, and the data from your test group. With type="response", you'll get the predicted probabilities, the default is the pred
38,565
How to test a logistic regression model developed on a training sample on the data left out using R?
You may want to take a close look at the caret package which has a lot of support for this type of analysis. Its four vignettes give a good overview of how it can help you with this.
How to test a logistic regression model developed on a training sample on the data left out using R
You may want to take a close look at the caret package which has a lot of support for this type of analysis. Its four vignettes give a good overview of how it can help you with this.
How to test a logistic regression model developed on a training sample on the data left out using R? You may want to take a close look at the caret package which has a lot of support for this type of analysis. Its four vignettes give a good overview of how it can help you with this.
How to test a logistic regression model developed on a training sample on the data left out using R You may want to take a close look at the caret package which has a lot of support for this type of analysis. Its four vignettes give a good overview of how it can help you with this.
38,566
Understanding statistical control charts
The purpose of a control chart is to identify, as quickly as possible, when something fixable is going wrong. For it to work well, it must not identify random or uncontrollable changes as being "out of control." The problems with the procedure described are manifold. They include The "stable" section of the graph is not typical. By definition, it is less variable than usual. By underestimating variability of the in-control situation, it will cause the chart incorrectly to identify many changes as out of control. Using standard errors is simply mistaken. A standard error estimates the sampling variability of the mean weekly call rate, not the variability of the call rates themselves. Setting the limits at $\pm 3$ standard deviations might or might not be effective. It is based on a rule of thumb applicable for normally distributed data that are not serially correlated. Call rates will not be normally distributed unless they are moderately large (around 100+ per week, approximately). They might or might not be serially correlated. The procedure assumes the underlying process has an unvarying rate over time. But you're not making widgets; you're responding to a market that--hopefully--is (a) increasing in size yet (b) decreasing its call rate over time. Temporal trends are expected. Sooner or later any trends will cause the data to look consistently out of control. People tend to undergo annual cycles of activity corresponding to seasons, the academic calendar, holidays, and so on. These cycles act like trends to cause predictable (but meaningless) out-of-control events. A simulated dataset illustrates these principles and problems. The simulation procedure creates a realistic series of data that are in control: relative to a predictable underlying pattern, it includes no out-of-control excursions that can be assigned a cause. This plot is a typical outcome of the simulation. These data are drawn from Poisson distributions, a reasonable model for call rates. 
They start at a baseline of 100 per week, trending upward linearly by 13 per week per year. Superimposed on this trend is a sinusoidal annual cycle with an amplitude of eight calls per week (traced by the dashed gray curve). This is a modest trend and a relatively small seasonality, I believe. The red dots (around weeks 12 - 37) were identified as the 26-week period of lowest standard deviation encountered during the first 1.5 years of this two year chart. The thin red and blue lines are set at $\pm 3$ standard errors around this period's mean. (Obviously they are useless.) The thick gold and green lines are set at $\pm 3$ standard deviations around the mean. (One doesn't usually project control lines backwards in time, but I have done that here for visual reference. It's usually meaningless to apply controls retroactively: they're intended to identify future changes.) Note how the secular trend and the seasonal variations drive the system into apparent out-of-control conditions between weeks 40-65 (an annual high) and after week 85 (an annual high plus over one year's cumulative trend). Anybody attempting to use this as a control chart would be mistakenly looking for nonexistent causes most of the time. In practice, this system would be hated and soon ignored by everyone. (I have seen companies where every office door and all the hallway walls were covered in control charts that nobody bothered to read, because they all knew better.) The right way to proceed begins by asking the basic questions, such as how do you measure quality? What influences can you have over it? How, despite your best efforts, are these measures likely to fluctuate? What would extreme fluctuations tell you (what could their controllable causes be)? Then, you need to perform a statistical analysis of the past data. What is their distribution? Are they temporally correlated? Are there trends? Seasonal components? Evidence of past excursions that might have indicated out of control situations? 
Having done all this, it may then be possible to create an effective control chart (or other statistical monitoring) system. The literature is large, so if this company is serious about using quantitative methods to improve quality, there is ample information about how to do so. But ignoring these statistical principles (whether through lack of time or lack of knowledge) practically guarantees that the effort will fail.
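The simulation is easy to reproduce. Here is a rough Python/NumPy sketch (parameters taken from the description: baseline 100, trend 13 per week per year, seasonal amplitude 8; the window search and random seed are my own choices) showing that the cherry-picked low-variability window plus standard-error limits flags perfectly in-control data:

```python
import numpy as np

rng = np.random.default_rng(42)
weeks = np.arange(104)                                  # two years, weekly
level = 100 + 13 * weeks / 52 + 8 * np.sin(2 * np.pi * weeks / 52)
calls = rng.poisson(level)                              # in-control counts

# The flawed recipe: pick the 26-week window of lowest SD in the first 1.5 years
win = 26
sds = np.array([calls[i:i + win].std() for i in range(78 - win + 1)])
i0 = int(sds.argmin())
m = calls[i0:i0 + win].mean()
sd = calls[i0:i0 + win].std()
se = sd / np.sqrt(win)                                  # standard error, not SD

out_se = int(np.sum((calls < m - 3 * se) | (calls > m + 3 * se)))
out_sd = int(np.sum((calls < m - 3 * sd) | (calls > m + 3 * sd)))
```

The narrow +/-3 SE limits typically flag many of these in-control points as "out of control"; the wider +/-3 SD limits necessarily flag a subset of those.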
Understanding statistical control charts
The purpose of a control chart is to identify, as quickly as possible, when something fixable is going wrong. For it to work well, it must not identify random or uncontrollable changes as being "out
Understanding statistical control charts The purpose of a control chart is to identify, as quickly as possible, when something fixable is going wrong. For it to work well, it must not identify random or uncontrollable changes as being "out of control." The problems with the procedure described are manifold. They include The "stable" section of the graph is not typical. By definition, it is less variable than usual. By underestimating variability of the in-control situation, it will cause the chart incorrectly to identify many changes as out of control. Using standard errors is simply mistaken. A standard error estimates the sampling variability of the mean weekly call rate, not the variability of the call rates themselves. Setting the limits at $\pm 3$ standard deviations might or might not be effective. It is based on a rule of thumb applicable for normally distributed data that are not serially correlated. Call rates will not be normally distributed unless they are moderately large (around 100+ per week, approximately). They might or might not be serially correlated. The procedure assumes the underlying process has an unvarying rate over time. But you're not making widgets; you're responding to a market that--hopefully--is (a) increasing in size yet (b) decreasing its call rate over time. Temporal trends are expected. Sooner or later any trends will cause the data to look consistently out of control. People tend to undergo annual cycles of activity corresponding to seasons, the academic calendar, holidays, and so on. These cycles act like trends to cause predictable (but meaningless) out-of-control events. A simulated dataset illustrates these principles and problems. The simulation procedure creates a realistic series of data that are in control: relative to a predictable underlying pattern, it includes no out-of-control excursions that can be assigned a cause. This plot is a typical outcome of the simulation. 
These data are drawn from Poisson distributions, a reasonable model for call rates. They start at a baseline of 100 per week, trending upward linearly by 13 per week per year. Superimposed on this trend is a sinusoidal annual cycle with an amplitude of eight calls per week (traced by the dashed gray curve). This is a modest trend and a relatively small seasonality, I believe. The red dots (around weeks 12 - 37) were identified as the 26-week period of lowest standard deviation encountered during the first 1.5 years of this two year chart. The thin red and blue lines are set at $\pm 3$ standard errors around this period's mean. (Obviously they are useless.) The thick gold and green lines are set at $\pm 3$ standard deviations around the mean. (One doesn't usually project control lines backwards in time, but I have done that here for visual reference. It's usually meaningless to apply controls retroactively: they're intended to identify future changes.) Note how the secular trend and the seasonal variations drive the system into apparent out-of-control conditions between weeks 40-65 (an annual high) and after week 85 (an annual high plus over one year's cumulative trend). Anybody attempting to use this as a control chart would be mistakenly looking for nonexistent causes most of the time. In practice, this system would be hated and soon ignored by everyone. (I have seen companies where every office door and all the hallway walls were covered in control charts that nobody bothered to read, because they all knew better.) The right way to proceed begins by asking the basic questions, such as how do you measure quality? What influences can you have over it? How, despite your best efforts, are these measures likely to fluctuate? What would extreme fluctuations tell you (what could their controllable causes be)? Then, you need to perform a statistical analysis of the past data. What is their distribution? Are they temporally correlated? Are there trends? Seasonal components? 
Evidence of past excursions that might have indicated out of control situations? Having done all this, it may then be possible to create an effective control chart (or other statistical monitoring) system. The literature is large, so if this company is serious about using quantitative methods to improve quality, there is ample information about how to do so. But ignoring these statistical principles (whether through lack of time or lack of knowledge) practically guarantees that the effort will fail.
Understanding statistical control charts The purpose of a control chart is to identify, as quickly as possible, when something fixable is going wrong. For it to work well, it must not identify random or uncontrollable changes as being "out
38,567
Understanding statistical control charts
The general idea of control charts is to distinguish between common cause variation and special cause variation. The idea is that the process is fairly stable and generates data from a given distribution (though the Poisson makes more sense for number of calls than the normal). One big advantage of control charts is that they limit overreacting to natural variation while still allowing for finding when the process has changed. Choosing a set of observations because they have small variation would almost guarantee that the limits are too narrow and therefore increase the inappropriate reactions to normal variation. Using all the data makes a lot more sense, and using a Poisson C chart might be better than an x-bar chart. But, it also seems that a call center would expect differences due to holidays or season (depending on what is being supported), so the underlying assumptions may not even be appropriate here. It sounds like they are doing something because they can rather than because it answers a meaningful question.
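For reference, a Poisson C chart sets its limits at the average count plus or minus three times its square root, since a Poisson variance equals its mean. A minimal Python sketch with made-up counts:

```python
import math

def c_chart_limits(counts):
    """Center line and 3-sigma limits for a Poisson C chart."""
    cbar = sum(counts) / len(counts)
    sigma = math.sqrt(cbar)                  # Poisson: variance = mean
    lcl = max(0.0, cbar - 3 * sigma)         # count cannot go below zero
    ucl = cbar + 3 * sigma
    return lcl, cbar, ucl

lcl, cbar, ucl = c_chart_limits([48, 52, 55, 43, 50, 52])
```

Note that these limits still assume a stable underlying rate, so the holiday/season caveat above applies to the C chart just as much as to the x-bar chart.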
Understanding statistical control charts
The general idea of control charts is to distinguish between common cause variation and special cause variation. The idea is that the process is fairly stable and generates data from a given distribu
Understanding statistical control charts The general idea of control charts is to distinguish between common cause variation and special cause variation. The idea is that the process is fairly stable and generates data from a given distribution (though the Poisson makes more sense for number of calls than the normal). One big advantage of control charts is that they limit overreacting to natural variation while still allowing for finding when the process has changed. Choosing a set of observations because they have small variation would almost guarantee that the limits are too narrow and therefore increase the inappropriate reactions to normal variation. Using all the data makes a lot more sense, and using a Poisson C chart might be better than an x-bar chart. But, it also seems that a call center would expect differences due to holidays or season (depending on what is being supported), so the underlying assumptions may not even be appropriate here. It sounds like they are doing something because they can rather than because it answers a meaningful question.
Understanding statistical control charts The general idea of control charts is to distinguish between common cause variation and special cause variation. The idea is that the process is fairly stable and generates data from a given distribu
38,568
When does a logistic regression model have a unique solution?
The solution of logistic regression is a solution of maximization of a certain function, namely the log-likelihood: $$\sum_{i=1}^ny_i\log p_i+(1-y_i)\log(1-p_i),$$ where $$p_i=\frac{\exp(\beta_0+\beta_1x_{1i}+...+\beta_kx_{ki})}{1+\exp(\beta_0+\beta_1x_{1i}+...+\beta_kx_{ki})},$$ and $(y_i,x_{1i},...,x_{ki})$, $i=1,...,n$ is the data. So mathematically speaking the unique solution of logistic regression exists for a given data set if the log-likelihood has a unique maximum. If I am not mistaken, full rank of the matrix $X=[1,x_{1i},...,x_{ki}]$ is necessary for that. For more mathematical conditions you might look into iterative reweighted least squares, since maximisation of the log-likelihood function for logistic regression is a special case of IRWLS.
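The IRWLS connection can be sketched in a few lines of NumPy (toy data of my own; no safeguards for separation or near-zero weights, so this is illustrative rather than production code). Each Newton step solves a weighted least-squares problem with weights $p_i(1-p_i)$:

```python
import numpy as np

def logistic_irls(X, y, n_iter=25):
    """Newton-Raphson / IRWLS for logistic regression:
    beta <- (X' W X)^{-1} X' W z, with W = diag(p(1-p)) and working response z."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        z = X @ beta + (y - p) / W
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
p_true = 1 / (1 + np.exp(-(0.5 - 1.0 * x)))
y = (rng.random(200) < p_true).astype(float)
beta_hat = logistic_irls(X, y)               # should land near (0.5, -1.0)
```

The `np.linalg.solve` call is exactly where full rank matters: if X is rank-deficient, the weighted normal-equations matrix is singular and the step is undefined.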
When does a logistic regression model have a unique solution?
The solution of logistic regression is a solution of maximization of certain function, namely log-likelihood: $$\sum_{i=1}^ny_i\log p_i+(1-y_i)\log(1-p_i),$$ where $$p_i=\frac{\exp(\beta_0+\beta_1x_{
When does a logistic regression model have a unique solution? The solution of logistic regression is a solution of maximization of a certain function, namely the log-likelihood: $$\sum_{i=1}^ny_i\log p_i+(1-y_i)\log(1-p_i),$$ where $$p_i=\frac{\exp(\beta_0+\beta_1x_{1i}+...+\beta_kx_{ki})}{1+\exp(\beta_0+\beta_1x_{1i}+...+\beta_kx_{ki})},$$ and $(y_i,x_{1i},...,x_{ki})$, $i=1,...,n$ is the data. So mathematically speaking the unique solution of logistic regression exists for a given data set if the log-likelihood has a unique maximum. If I am not mistaken, full rank of the matrix $X=[1,x_{1i},...,x_{ki}]$ is necessary for that. For more mathematical conditions you might look into iterative reweighted least squares, since maximisation of the log-likelihood function for logistic regression is a special case of IRWLS.
When does a logistic regression model have a unique solution? The solution of logistic regression is a solution of maximization of certain function, namely log-likelihood: $$\sum_{i=1}^ny_i\log p_i+(1-y_i)\log(1-p_i),$$ where $$p_i=\frac{\exp(\beta_0+\beta_1x_{
38,569
When does a logistic regression model have a unique solution?
I believe you are looking for the concept of orthogonality of the covariates. As soon as one of the covariates can be written as a linear combination of the others, you will not have a unique solution. As an extreme case: say you have 2 covariates, and one is (in your dataset) always double the other; then both $$logodds(outcome)=\beta_0+\beta_1 X_1$$ and $$logodds(outcome)=\beta_0+\frac{1}{2}\beta_1 X_2$$ will yield the same results (regardless of $\beta_1$), and of course there are lots of other solutions.
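This non-uniqueness shows up concretely as rank deficiency of the design matrix. A small NumPy illustration with made-up data, using the X2 = 2*X1 situation described above:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = 2 * x1                                   # one covariate always double the other
X = np.column_stack([np.ones(50), x1, x2])

# Rank 2 instead of 3: X'X (and hence the Fisher information X'WX used by
# the fitting algorithm) is singular, so the maximizer is not unique.
rank = int(np.linalg.matrix_rank(X))
gram_rank = int(np.linalg.matrix_rank(X.T @ X))
```

In practice software either errors out here or silently drops one of the aliased columns.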
When does a logistic regression model have a unique solution?
I believe you are looking for the concept of orthogonality of the covariates. As soon as one of the covariates can be written as a linear combination of one of the others, you will not have a unique s
When does a logistic regression model have a unique solution? I believe you are looking for the concept of orthogonality of the covariates. As soon as one of the covariates can be written as a linear combination of the others, you will not have a unique solution. As an extreme case: say you have 2 covariates, and one is (in your dataset) always double the other; then both $$logodds(outcome)=\beta_0+\beta_1 X_1$$ and $$logodds(outcome)=\beta_0+\frac{1}{2}\beta_1 X_2$$ will yield the same results (regardless of $\beta_1$), and of course there are lots of other solutions.
When does a logistic regression model have a unique solution? I believe you are looking for the concept of orthogonality of the covariates. As soon as one of the covariates can be written as a linear combination of one of the others, you will not have a unique s
38,570
When does a logistic regression model have a unique solution?
I think an interesting point is that when the data is separable there should be infinitely many solutions. But if you use GD you converge to the max-margin solution (the intuition I have is that GD for linear regression, when it's overparameterized, converges to the pseudo-inverse solution, which is min-norm, which is max-margin, since the margin is sometimes related to 1/w). So in a way, it's like having a unique minimizer. Even though we never truly get there, we do converge to it. You can check this out here: [1710.10345] The Implicit Bias of Gradient Descent on Separable Data (https://arxiv.org/abs/1710.10345) Intuitively, if you look at the gradient: $$ \nabla_w l(w) = -\frac{1}{N} \sum^N_{n=1} \frac{y^{(n)} x^{(n)}}{ 1 + e^{y^{(n)} w^\top x^{(n)}} }$$ since the weights increase, so does the score and thus the denominator of the above. But the weights increase “sort of linearly” while the decrease in the size of the gradient is exponential (as seen above). So GD stops updating “pretty soon”. Or at least that's the way I sort of understand it at a high level. For real answers, refer to the paper of course. Therefore if the data is separable and you use GD, you converge (approach) to the max-margin solution for unregularized logistic regression, which is unique.
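A tiny experiment consistent with this picture (made-up separable data, plain Python/NumPy): on separable data the logistic-loss gradient never vanishes, so the weight keeps growing without bound, but ever more slowly.

```python
import numpy as np

# Linearly separable 1-D data with labels in {-1, +1}
x = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])

w, norms = 0.0, {}
for t in range(1, 20001):
    # gradient of the mean logistic loss log(1 + exp(-y*w*x)) w.r.t. w
    grad = -np.mean(y * x / (1 + np.exp(y * w * x)))
    w -= 0.1 * grad
    if t in (100, 1000, 20000):
        norms[t] = abs(w)
```

The recorded norms keep increasing (roughly logarithmically in t); the direction, trivial in 1-D, is the part that converges to the max-margin separator.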
38,571
How to calculate confidence interval for count data in R?
You are looking for a confidence interval around the count from a Poisson process. If you put for example 42 into your linked example you get:

You observed 42 objects in a certain volume or 42 events in a certain time period.
Exact Poisson confidence interval:
The 90% confidence interval extends from 31.94 to 54.32
The 95% confidence interval extends from 30.27 to 56.77
The 99% confidence interval extends from 27.18 to 61.76

You can get this in R using poisson.test. For example:

> poisson.test(42, conf.level = 0.9)
        Exact Poisson test
data:  42 time base: 1
number of events = 42, time base = 1, p-value < 2.2e-16
alternative hypothesis: true event rate is not equal to 1
90 percent confidence interval:
 31.93813 54.32395
sample estimates:
event rate
        42

and similarly the other values by changing conf.level. If you do not want all the background information, try something like:

> poisson.test(42, conf.level = 0.95)$conf.int
[1] 30.26991 56.77180
attr(,"conf.level")
[1] 0.95
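For readers not using R, the same exact interval can be sketched in Python via the chi-square quantile relationship (the helper name poisson_ci is made up; scipy is assumed to be available):

```python
from scipy.stats import chi2

def poisson_ci(k, conf_level=0.95):
    """Exact (Garwood) CI for a Poisson count k; lower limit is 0 when k == 0."""
    alpha = 1 - conf_level
    lower = 0.5 * chi2.ppf(alpha / 2, 2 * k) if k > 0 else 0.0
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * k + 2)
    return lower, upper

print(poisson_ci(42, 0.90))  # ~(31.93813, 54.32395), matching poisson.test
```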
38,572
How to calculate confidence interval for count data in R?
If the number of events is too small, it would be better to use the exact method.

exactPoiCI <- function (X, conf.level=0.95) {
    alpha = 1 - conf.level
    upper <- 0.5 * qchisq((1-(alpha/2)), (2*X))
    lower <- 0.5 * qchisq(alpha/2, (2*X +2))
    return(c(lower, upper))
}
exactPoiCI(42, 0.9)
exactPoiCI(42)
exactPoiCI(42, 0.99)

Reference: Liddell FD. Simple exact analysis of the standardised mortality ratio. J Epidemiol Community Health. 1984;38:85-8 (link)
38,573
How to calculate confidence interval for count data in R?
The first answer using poisson.test does give the exact confidence interval. However, this calculation is so simple that I prefer to calculate it directly instead of using a library function. In the second answer, there is a minor error. The +2 should be in the degrees of freedom for the upper CI calculation, not for the lower one. So the correct code should be:

exactPoiCI <- function (X, conf.level=0.95) {
    alpha = 1 - conf.level
    upper <- 0.5 * qchisq(1-alpha/2, 2*X+2)
    lower <- 0.5 * qchisq(alpha/2, 2*X)
    return(c(lower, upper))
}
exactPoiCI(42, 0.9)
exactPoiCI(42)
exactPoiCI(42, 0.99)
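A quick numeric cross-check of this correction (a Python sketch; the helper name ci and the swap_df flag are invented): with the +2 on the upper limit's degrees of freedom, the interval reproduces poisson.test's exact values, while the swapped placement gives a strictly narrower, anticonservative interval.

```python
from scipy.stats import chi2

def ci(k, conf_level=0.95, swap_df=False):
    alpha = 1 - conf_level
    df_lower, df_upper = 2 * k, 2 * k + 2
    if swap_df:  # the degrees-of-freedom placement from the previous answer
        df_lower, df_upper = 2 * k + 2, 2 * k
    return (0.5 * chi2.ppf(alpha / 2, df_lower),
            0.5 * chi2.ppf(1 - alpha / 2, df_upper))

good = ci(42, 0.90)                   # ~(31.938, 54.324), as in poisson.test
swapped = ci(42, 0.90, swap_df=True)  # strictly inside the exact interval
print(good, swapped)
```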
38,574
Visualizing two scalar variables over time
The other idea is to plot one series as x and the second as y -- the time dependency will be hidden, but this plot shows correlations pretty well. (Yet time can be shown to some extent by connecting the points chronologically; if the series are quite short and continuous it should be readable.)
38,575
Visualizing two scalar variables over time
I sometimes make the x-axis time and plot both scalar variables on the y-axis. When the two scalar variables are on a different metric, I rescale one or both of the scalar variables so they can be displayed on the same plot. I use things like colour and shape to discriminate the two scalar variables. I've often used xyplot from lattice for this purpose. Here's an example:

require(lattice)
xyplot(dv1 + dv2 ~ iv, data = x, col = c("black", "red"))
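A rough Python/matplotlib equivalent of the same idea (all data and variable names below are invented for illustration): plot both series against time, min-max rescaling the second onto the first's range so they share one axis.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt

t = np.arange(100)
dv1 = np.sin(t / 10.0)                 # one metric, range ~[-1, 1]
dv2 = 50.0 + 20.0 * np.cos(t / 10.0)   # a very different metric

# Min-max rescale dv2 onto dv1's range so both fit on the same axis.
dv2_scaled = ((dv2 - dv2.min()) / (dv2.max() - dv2.min())
              * (dv1.max() - dv1.min()) + dv1.min())

fig, ax = plt.subplots()
ax.plot(t, dv1, color="black", label="dv1")
ax.plot(t, dv2_scaled, color="red", label="dv2 (rescaled)")
ax.set_xlabel("time")
ax.legend()
```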
38,576
Visualizing two scalar variables over time
A method that can be very effective--one I have found extremely useful--is to sort the data by time and draw a connected X,Y scatterplot. (That is, successive points are connected by line segments or a spline.) This much is straightforward in almost any statistical plotting package. If the result is too confusing, add graphical indications of directionality. Depending on the density of the points and their pattern of temporal evolution, options include using arrows on the line segments or otherwise applying a graduated color or thickness to the segments to indicate their times. You can even dispense with the connecting lines and just color or size the points to indicate time: that works better when there are many points on the plot. In addition to displaying the bivariate relationship among the data in a conventional form, this method supplies a clear visual indication of temporally local correlations, changes that run counter to the prevailing correlation, etc. An example of this appears in my reply at Forecasting time series based on a behavior of other one : Because the points follow clear paths through this plot, I dropped the connecting lines in favor of letting hue represent time. This plot reveals far more detail about corresponding behaviors than the two original graphs do. The time dependency is qualitatively clear from the changes in hue, but a legend (matching colors to times) would help the reader.
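A minimal sketch of this kind of plot in Python/matplotlib (invented data; the original answer's figure is not reproduced here): each point is one time step, a faint line connects them in time order, and hue encodes time.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 200)             # "time"
x = np.cos(t) + 0.05 * rng.standard_normal(t.size)  # series 1
y = np.sin(t) + 0.05 * rng.standard_normal(t.size)  # series 2

fig, ax = plt.subplots()
ax.plot(x, y, color="lightgray", lw=0.5, zorder=1)          # connecting path
sc = ax.scatter(x, y, c=t, cmap="viridis", s=12, zorder=2)  # hue = time
fig.colorbar(sc, ax=ax, label="time")  # the legend matching colours to times
```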
38,577
Why is the Central Limit applicable in A/B testing?
Suppose you have two populations: A and B. You draw samples from A: $a_1,a_2,...,a_n$ You draw samples from B: $b_1,b_2,...,b_n$ The actual values of the $a$'s and $b$'s are just the numbers $0$ and $1$, which represent successes/failures. Now let $\alpha$ and $\beta$ denote the averages of these samples, i.e. $$ \alpha = \frac{a_1+a_2+...+a_n}{n} \text{ and }\beta = \frac{b_1 + b_2 + ... + b_n}{n} $$ These quantities $\alpha,\beta$ represent the proportion of successes that you see in each sample. For example, suppose your samples for A included a total of 30 times where $a=1$ and 70 times where $a=0$. Then $\alpha = .3$, and so you estimate that the success rate for population A is roughly 30 percent. You can apply the CLT to $\alpha$ and $\beta$ since they are sample means. You are correct that the $a$'s and $b$'s are not normally distributed. But the moment you start taking their means, those means become (approximately, for large $n$) normally distributed. As a follow-up question you ask "why is their difference normally distributed?" Their difference is given by $\alpha - \beta$. It is a well-known theorem in probability that if $\alpha$ and $\beta$ are normally distributed, and they are independent, then $\alpha - \beta$ is also normally distributed. Do you need help determining the $\mu$ and $\sigma$ parameters for their difference?
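A small simulation sketch of this (the success rates, sample size, and replicate count below are arbitrary) showing that $\alpha - \beta$ behaves like a normal variable with the textbook mean and standard deviation:

```python
import numpy as np

rng = np.random.default_rng(42)
pA, pB, n, reps = 0.30, 0.25, 1000, 20000

alpha = rng.binomial(n, pA, size=reps) / n  # sample proportions for A
beta = rng.binomial(n, pB, size=reps) / n   # sample proportions for B
diff = alpha - beta

# sd of the difference: sqrt(pA(1-pA)/n + pB(1-pB)/n)
theory_sd = np.sqrt(pA * (1 - pA) / n + pB * (1 - pB) / n)
within2 = np.mean(np.abs(diff - (pA - pB)) < 2 * theory_sd)
print(diff.mean(), diff.std(), theory_sd, within2)
```

The empirical mean sits near $p_A - p_B = 0.05$, the empirical sd near the theoretical one, and roughly 95% of replicates fall within two sd's, as a normal distribution predicts.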
38,578
Why is the Central Limit applicable in A/B testing?
"we only draw two samples" You can consider a sample of size $n$ as $n$ samples of size $1$. The outcome can be seen as a sum of $n$ independent Bernoulli-distributed variables (if the people in the sample are independent), also known as a binomial distribution. "the distribution of their means tends towards a normal distribution" The central limit theorem tells us that in the limit of an infinite sample size the distribution of the normalised sum will approach a normal distribution. In practice this is used to argue that a finite sample will also approximately follow a normal distribution. In the special case of a binomial distribution we can also use the De Moivre–Laplace theorem to argue that the distribution is approximately normally distributed. Related: Statistics question: Why is the standard error, which is calculated from 1 sample, a good approximation for the spread of many hypothetical means?
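The De Moivre–Laplace approximation is easy to check numerically (n and p below are chosen arbitrarily; scipy assumed):

```python
import numpy as np
from scipy.stats import binom, norm

n, p = 1000, 0.3
k = np.arange(n + 1)

pmf = binom.pmf(k, n, p)  # exact Binomial(n, p) probabilities
approx = norm.pdf(k, loc=n * p, scale=np.sqrt(n * p * (1 - p)))

max_err = np.max(np.abs(pmf - approx))
print(max_err)  # tiny compared with the peak probability
```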
38,579
Why is the Central Limit applicable in A/B testing?
The CLT states that as we draw random samples from a population, the distribution of their means tends towards a normal distribution. Not quite, though maybe you have a book that says something vague and not particularly accurate like this. Such vagueness of language has misled you about what it is even referring to. It's not the number of samples, but the number of observations in a sample. Sample sizes are very large in typical A/B tests, but see the later discussion, which explains why that might not be sufficient for the sort of variables commonly used in many A/B tests. First let's look at what the CLT says, or at least let us get a bit closer to a formal statement of it. In particular (for a 'classical' CLT in mean-form), if $\bar{X}_n$ ($n=1,2,3,...$) is a sequence of sample means of $n$ independent and identically distributed observations from a population with finite mean and variance ($\mu$ and $\sigma^2$), and $Z_n = \frac{\bar{X}_n-\mu}{\sigma/\sqrt{n}}$ is the standardized mean, then in the limit as $n\to\infty$ the (cumulative) distribution function $F_n(z)$ of $Z_n$ converges to the standard normal cdf, $\Phi(z)$. (Conveniently, this theorem - relating as it does to distribution functions - is in a form that is potentially relevant to evaluating tail probabilities.) This would suggest that as sample sizes become very large, the distribution function $F_n$ of a statistic $Z_n$ should eventually become close to that of a standard normal distribution. The CLT itself doesn't tell you how large that might need to be; it only talks about what happens in the limit as $n$ goes to infinity. In some situations (even when the CLT holds), that might need to be very large indeed. In particular, you might consider very skewed distributions (which are very common in calculations like click-through rates or purchase rates or whatever, and also in effectively continuous quantities such as time or money spent on a site) and see that sample means can sometimes remain clearly non-normal even when sample sizes are getting into the thousands. Nonetheless, the difference between these two samples is guaranteed to be a normal distribution. I presume you intend "difference between sample means" there (otherwise, where does the CLT come in? 'difference in samples' is not a test statistic), but either way, this is wrong. Indeed, if the original distributions were non-normal, you can guarantee that the distributions of the sample means (and of their difference, given the usual assumptions) are not actually normal. However, it might in practice get quite close $-$ close enough that the normal will yield perfectly reasonable answers, except perhaps in the extreme tail $-$ if only the sample size were large enough. Large enough, that is, given the particular situation you're in and your particular sense of what might be close enough. My question is, how is the difference between these two samples constructed, and why is it guaranteed to be a normal distribution? Look to the specific statistic you're using in the test. You don't mention which it is, and in A/B testing there are at least two distinct situations that are commonly involved and many more which might be possible. For both those common cases the statistic at least has a numerator with the form of a difference of means. However, the CLT alone is not sufficient for the whole statistic in either case. A suitable argument for a t-test (e.g. if you were testing, say, time or money spent at a site) or a z-test (as an approximation of a binomial test of proportions) would require additional argument, since the denominator is also a random variable. Such an argument is possible (e.g. by invoking Slutsky's theorem).
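The point that sample means of very skewed variables can stay visibly non-normal even at large $n$ is easy to see in a simulation (the lognormal population and all the numbers below are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1000, 5000

# Means of n = 1000 draws from a heavily right-skewed population.
means = rng.lognormal(mean=0.0, sigma=2.0, size=(reps, n)).mean(axis=1)

# Sample skewness of the distribution of means (0 for a normal variable).
z = (means - means.mean()) / means.std()
skew = np.mean(z ** 3)
print(skew)  # clearly positive despite n = 1000
```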
38,580
Why is the Central Limit applicable in A/B testing?
Your question is full of incoherent statements. The CLT states that as we draw random samples from a population, the distribution of their means tends towards a normal distribution. In math, "tends to" is used to refer to a limit, and a limit always has some independent variable, and we take the limit as that variable goes to some value. Furthermore, a limit requires some norm and/or topology. Real numbers have the norm of the absolute value. PDFs do have norms, but there are more than one, so to be rigorous, one should specify one. So your statement of the CLT does not constitute a clear mathematical statement. And while one could infer some more rigorous statement, such as that you mean "the $L^2$ norm of the distribution of their means minus a normal distribution goes towards 0 as the sample size goes to infinity", that still wouldn't be correct, because there is no one normal distribution that it goes towards. You have to take the z-score for it to go to a particular normal distribution. However, in A/B testing, we only draw two samples, and their distribution is not necessarily guaranteed to be normal. This also is not a precise statement. The normal distribution is a continuous distribution. A sample is a set of discrete values. What does it mean to compare them? Nonetheless, the difference between these two samples is guaranteed to be a normal distribution. Difference? A sample is a set of observations. How do you take the "difference" between two sets? There is the "set difference" of everything in one that isn't in the other, but how would that be normal? Perhaps you mean "the difference between their means". If so, you should be more precise. Furthermore, the mean is a particular number, not a distribution. Precision is very important in mathematics. Yes, mathematicians speak loosely in some contexts, but if you're having trouble understanding something, that's not an appropriate context to be using casual language. You're asking people to explain something to you, and requiring them to make inference after inference as to what you mean. The core issue in your question seems to be the statement "However, in A/B testing, we only draw two samples, and their distribution is not necessarily guaranteed to be normal." The distribution of sample means is approximately normal for large sample size, so your apparent intended statement is false.
Why is the Central Limit applicable in A/B testing?
Your question is full of incoherent statements. The CLT states that as we draw random samples from a population, the distribution of their means tends towards a normal distribution. In math, "tends
Why is the Central Limit applicable in A/B testing? Your question is full of incoherent statements. The CLT states that as we draw random samples from a population, the distribution of their means tends towards a normal distribution. In math, "tends to" is used to refer to a limit, and a limit always has some independent variable, and we take the limit as that variable goes to some value. Furthermore, a limit requires some norm and/or topology. Real numbers have the norm of the absolute value. PDFs do have norms, but there are more than one, so to be rigorous, one should specify one. So your statement of the CLT does not constitute a clear mathematical statement. And while one could infer some more rigorous statement, such that you mean "the $L^2$ norm of the distribution of their means minus a normal distribution goes towards 0 as the sample size goes to infinity", that still wouldn't be correct, because there is no one normal distribution that it goes towards. You have to take the z-score for it to go to a particular normal distribution. However, in A/B testing, we only draw two samples, and their distribution is not necessarily guaranteed to be normal. This also is not a precise statement. The normal distribution is a continuous distribution. A sample is a set of discrete values. What does it mean to compare them? Nonetheless, the difference between these two samples is guaranteed to be a normal distribution. Difference? A sample is a set of observations. How do you take the "difference" between two sets? There is the "set difference" of everything in one that isn't in the other, but how would that be normal? Perhaps you mean "the difference between their means". If so, you should be more precise. Furthermore, the mean is a particular number, not a distribution. Precision is very important in mathematics. 
Yes, mathematicians speak loosely in some contexts, but if you're having trouble understanding something, that's not an appropriate context to be using casual language. You're asking people to explain something to you, and requiring them to make inference after inference as to what you mean. The core issue in your question seems to be the statement "However, in A/B testing, we only draw two samples, and their distribution is not necessarily guaranteed to be normal." The distribution of sample means is approximately normal for large sample size, so your apparent intended statement is false.
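A small simulation can make the intended statement concrete. This is a hypothetical sketch (names and sample sizes are my own choices): draw repeated samples from a decidedly non-normal Uniform(0,1) population, form the z-score of each sample mean, and check that the z-scores behave like a standard normal.

```python
import math
import random
import statistics

random.seed(0)
pop_mean, pop_sd = 0.5, math.sqrt(1 / 12)  # mean and sd of a Uniform(0,1) population

def z_scores(n, reps=2000):
    """z-scores of sample means of size n drawn from Uniform(0,1)."""
    zs = []
    for _ in range(reps):
        xbar = statistics.fmean(random.random() for _ in range(n))
        zs.append((xbar - pop_mean) / (pop_sd / math.sqrt(n)))
    return zs

zs = z_scores(n=50)
# If the z-scores are approximately standard normal, about 95% of them
# should fall within +/- 1.96:
frac_within = sum(abs(z) < 1.96 for z in zs) / len(zs)
```

The fraction lands near 0.95 even though the underlying population is uniform, which is the CLT statement about the distribution of standardized sample means.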
38,581
Why does the MAP differ from the MLE for the uniform prior in Laplace's Rule?
Starting with a $\operatorname{Beta}(1,1)$ prior, your posterior would be $\operatorname{Beta}(k+1,n-k+1)$. The mode of a $\operatorname{Beta}(k+1,n-k+1)$ distribution is $\frac{k}{n}$, which is the result you seem to want for an MAP (or MLE) estimator. But Laplace's rule of succession instead takes the mean of the $\operatorname{Beta}(k+1,n-k+1)$ distribution, which is $\frac{k+1}{n+2}$. Personally, I would usually take the mean of the posterior distribution, as the MAP and MLE do not correspond to a loss function and so seem difficult to justify. I might start with a different prior, such as a Jeffreys' $\operatorname{Beta}(\frac12,\frac12)$ prior.
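The mode/mean distinction above can be checked numerically. A minimal sketch, with hypothetical data of $k=7$ successes in $n=10$ trials:

```python
# For k successes in n trials with a uniform Beta(1,1) prior, the posterior
# is Beta(k+1, n-k+1). Its mode (the MAP) is k/n, matching the MLE, while
# its mean, (k+1)/(n+2), is Laplace's rule of succession.
k, n = 7, 10  # hypothetical data: 7 successes in 10 trials

map_estimate = k / n                   # mode of Beta(k+1, n-k+1)
laplace_estimate = (k + 1) / (n + 2)   # mean of Beta(k+1, n-k+1)

# Numerical check of the mode: maximize the unnormalized posterior
# (likelihood times a flat prior) on a fine grid.
grid = [i / 100_000 for i in range(1, 100_000)]
numeric_mode = max(grid, key=lambda p: p**k * (1 - p) ** (n - k))
```

Here `map_estimate` is 0.7 while `laplace_estimate` is 8/12 ≈ 0.667, so the two summaries of the same posterior genuinely disagree.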
38,582
Why does the MAP differ from the MLE for the uniform prior in Laplace's Rule?
There is no such thing as an "uninformative" prior that brings no information to a model. For the beta-binomial model with uniform prior $\mathcal{B}(1, 1)$, the mode of the posterior (MAP) is $\frac{x+1-1}{(x+1)+(n-x+1)-2} = \frac{x}{n}$, so it's the same as the MLE. If you want the mean of the posterior to be equal to the MLE, there's another prior, though an improper one, that leads to the same solution as the MLE: Haldane's prior $\mathcal{B}(0, 0)$.
38,583
"Are All Correlations Spurious But Some Correlations Are More Spurious?" - Not George Orwell
In statistics, a spurious relationship or spurious correlation is a mathematical relationship in which two or more events or variables are associated but not causally related, due to either coincidence or the presence of a certain third, unseen factor (referred to as a "common response variable", "confounding factor", or "lurking variable"). (Wikipedia) You are correct that correlation does not imply causation, so many correlations are spurious. When do we know that the relation is spurious? It’s when there’s no causal relationship. We don’t use correlations to find them, so looking at correlations alone won’t tell you that. Now the question boils down to “how do we detect causality?” It was answered in many threads tagged as causality, e.g. Interview question: If correlation doesn't imply causation, how do you detect causation?, while Introduction to causal analysis gives many good references to dive deeper.
"Are All Correlations Spurious But Some Correlations Are More Spurious?" - Not George Orwell
In statistics, a spurious relationship or spurious correlation is a mathematical relationship in which two or more events or variables are associated but not causally related, due to either coincidenc
"Are All Correlations Spurious But Some Correlations Are More Spurious?" - Not George Orwell In statistics, a spurious relationship or spurious correlation is a mathematical relationship in which two or more events or variables are associated but not causally related, due to either coincidence or the presence of a certain third, unseen factor (referred to as a "common response variable", "confounding factor", or "lurking variable"). (Wikipedia) You are correct that correlation does not imply causation, so many correlations are spurious. When do we know that the relation is spurious? It’s when there’s no causal relationship. We don’t use correlations to find them, so looking at correlations alone won’t tell you that. Now the question boils down to “how do we detect causality?” It was answered in many treads tagged as causality, e.g. Interview question: If correlation doesn't imply causation, how do you detect causation?, while Introduction to causal analysis gives many good references to dive deeper.
"Are All Correlations Spurious But Some Correlations Are More Spurious?" - Not George Orwell In statistics, a spurious relationship or spurious correlation is a mathematical relationship in which two or more events or variables are associated but not causally related, due to either coincidenc
38,584
"Are All Correlations Spurious But Some Correlations Are More Spurious?" - Not George Orwell
I'd say your understanding is correct. Philosophically, it's not truly possible to "prove" causation with 100% certainty, but we can convince ourselves that the alternative is arbitrarily unlikely. Most methods of determining causation rely on manipulating pertinent factors and randomizing all other possibly contributing factors, but this requires knowing what other possibly contributing factors there are. No matter how we control or randomize, it's not possible to rule out all potential unknowns, by their nature of being unknowns. For any well-described cause-effect relationship, I could posit that it's actually an unseen wizard that's manipulating the effect in just the right way such that it happens to line up perfectly with your suggested "cause". This of course quickly becomes very implausible when we look at realistic physical mechanisms for cause and effect, but you can't categorically disprove the existence of something by lack of evidence that it exists. We can become increasingly sure that a cause-effect relationship is real by appropriate experimental design and by using domain knowledge to describe realistic and well-understood mechanisms for cause and effect, but you can't truly rule out the possibility of an unknown cause that's never been observed or described, it just becomes vanishingly unlikely. At some point, the statistical and mechanistic support for the link between cause and effect are great enough to consider it "proven", even though it's technically not possible to rule out unseen causes with no understood mechanism.
"Are All Correlations Spurious But Some Correlations Are More Spurious?" - Not George Orwell
I'd say your understanding is correct. Philosophically, it's not truly possible to "prove" causation with 100% certainty, but we can convince ourselves that the alternative is arbitrarily unlikely. Mo
"Are All Correlations Spurious But Some Correlations Are More Spurious?" - Not George Orwell I'd say your understanding is correct. Philosophically, it's not truly possible to "prove" causation with 100% certainty, but we can convince ourselves that the alternative is arbitrarily unlikely. Most methods of determining causation rely on manipulating pertinent factors and randomizing all other possibly contributing factors, but this requires knowing what other possibly contributing factors there are. No matter how we control or randomize, it's not possible to rule out all potential unknowns, by their nature of being unknowns. For any well-described cause-effect relationship, I could posit that it's actually an unseen wizard that's manipulating the effect in just the right way such that it happens to line up perfectly with your suggested "cause". This of course quickly becomes very implausible when we look at realistic physical mechanisms for cause and effect, but you can't categorically disprove the existence of something by lack of evidence that it exists. We can become increasingly sure that a cause-effect relationship is real by appropriate experimental design and by using domain knowledge to describe realistic and well-understood mechanisms for cause and effect, but you can't truly rule out the possibility of an unknown cause that's never been observed or described, it just becomes vanishingly unlikely. At some point, the statistical and mechanistic support for the link between cause and effect are great enough to consider it "proven", even though it's technically not possible to rule out unseen causes with no understood mechanism.
"Are All Correlations Spurious But Some Correlations Are More Spurious?" - Not George Orwell I'd say your understanding is correct. Philosophically, it's not truly possible to "prove" causation with 100% certainty, but we can convince ourselves that the alternative is arbitrarily unlikely. Mo
38,585
Statistically compare two large continuous datasets
You could try some sampling techniques. In more detail, you could select smaller random samples from the blue, red and green populations and compare those samples using the traditional statistical tests you mentioned. Run that multiple times and count how many times the null hypothesis (that the means are equal) got rejected out of the total. Keep in mind that p-values are random variables too, so at a significance level of 5% you'd expect 5% of these hypothesis tests to reject the null hypothesis even when the means are the same (so potentially even in the red vs blue case). Alternatively, another option would be to run the Kolmogorov-Smirnov test.
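The repeated-subsampling idea can be sketched as follows. This is a hypothetical illustration: the two simulated lists stand in for the red and blue pixel populations, and the p-value uses a normal approximation to Welch's t rather than the exact t distribution (reasonable at these subsample sizes).

```python
import math
import random
import statistics

random.seed(0)
red = [random.gauss(0.70, 0.10) for _ in range(50_000)]   # stand-in populations
blue = [random.gauss(0.75, 0.10) for _ in range(50_000)]

def welch_p(x, y):
    """Two-sided p-value of Welch's t-test, via a normal approximation."""
    t = (statistics.fmean(x) - statistics.fmean(y)) / math.sqrt(
        statistics.variance(x) / len(x) + statistics.variance(y) / len(y)
    )
    return math.erfc(abs(t) / math.sqrt(2))

n_sub, n_reps, alpha = 100, 200, 0.05
rejections = sum(
    welch_p(random.sample(red, n_sub), random.sample(blue, n_sub)) < alpha
    for _ in range(n_reps)
)
rejection_rate = rejections / n_reps  # high for red vs blue; near alpha for red vs red
```

A rejection rate well above the 5% false-positive baseline indicates a real difference that survives small-sample testing.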
38,586
Statistically compare two large continuous datasets
One feature (not a bug) of hypothesis testing is that it gets more sensitive to small differences as the sample size increases. Consequently, hypothesis testing considers more than just effect size, and you’re really only interested in the effect size (perhaps in addition to some quantification of the uncertainty). However, the description of your problem suggests that you will always have a sample size of the $50000$ pixels in your image. I suspect those pixels are not independent of one another (if a picture of a black dog has a black pixel, I say there’s a good chance that nearby pixels will also be black), but maybe you’re willing to make such an assumption; let’s assume so. In such a case, differences in the p-value will be due to differences in effect size and nothing more, so the p-value will be a decent measure of distribution similarity. To handle the p-value being tiny, you might consider taking a logarithm and determining your threshold on the log scale. However, you would be doing this to get at the effect size, so I would suggest looking directly at the effect size. You can use your software to calculate the difference in means along with confidence intervals, using those to make your decision. Perhaps even better would be to use the approach from the Kolmogorov-Smirnov test and find the maximum vertical distance between the empirical CDFs (along with a confidence interval for such a value), which will be sensitive to differences other than the mean. Another option to which you allude when you mention the overlap of the histograms is the Earth-mover’s distance. Yet another option is KL divergence. (Note that such an approach using confidence intervals still relies on independence of the pixels, which I doubt.)
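The Kolmogorov-Smirnov-style effect size mentioned above is easy to compute directly. A minimal sketch with simulated stand-in data (the real pixel values would replace the two lists):

```python
import bisect
import random

random.seed(1)
a = sorted(random.gauss(0.70, 0.10) for _ in range(50_000))
b = sorted(random.gauss(0.75, 0.10) for _ in range(50_000))

def ks_statistic(x_sorted, y_sorted):
    """Maximum vertical distance |F_x(t) - F_y(t)| between two empirical CDFs,
    evaluated at every pooled sample point."""
    d = 0.0
    for t in x_sorted + y_sorted:
        fx = bisect.bisect_right(x_sorted, t) / len(x_sorted)
        fy = bisect.bisect_right(y_sorted, t) / len(y_sorted)
        d = max(d, abs(fx - fy))
    return d

d = ks_statistic(a, b)  # roughly 0.20 for these two populations
```

Unlike a p-value, this distance does not blow up with sample size, so a fixed threshold on it is meaningful.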
38,587
Statistically compare two large continuous datasets
With such large sample sizes, you may get a clearer view of the relatively small (but obvious) differences between red and blue by looking at your histograms than from formal tests. Consider the (roughly similar) fictitious data below:

set.seed(2022)
r = rbeta(50000, 7, 3)
summary(r)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.09307 0.60781 0.71252 0.69911 0.80363 0.99693

b = rbeta(50000, 7.5, 2.5)
summary(b)
  Min. 1st Qu.  Median    Mean 3rd Qu.   Max.
0.1610  0.6658  0.7676  0.7495  0.8500  0.9974

Plotting kernel density estimators:

plot(density(b), col="blue", lwd=2, ylab="Density", xlab="value", main="KDEs")
lines(density(r), col="red", lwd=2)

Both distributions have support $(0,1)$ and there is some skewness, so there is some doubt whether t tests are precisely accurate, even if they do show a highly significant difference.

t.test(r, b)

        Welch Two Sample t-test

data:  r and b
t = -59.146, df = 99710, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.05205704 -0.04871757
sample estimates:
mean of x mean of y
0.6991094 0.7494967

Because of slightly different shapes and dispersions, a Wilcoxon rank sum test shows stochastic domination of blue over red (rather than just a difference in medians).

wilcox.test(r, b)

        Wilcoxon rank sum test with continuity correction

data:  r and b
W = 983310000, p-value < 2.2e-16
alternative hypothesis: true location shift is not equal to 0

It seems more direct to look at your histograms of the actual data, where results are obvious, than to make excuses for tests that may not be exactly applicable.
38,588
Statistically compare two large continuous datasets
The t-test and z-test do not work here because the degree of freedom is huge hence the p-value is 0

Yes. If you're testing the null hypothesis that the two datasets are unrelated, the p-value will be tiny. Even for the red versus green datasets, you should reject the null hypothesis. If you're not testing the null hypothesis that the two are unrelated, but merely trying to quantify how much they differ, there are many different metrics. Your description isn't really clear on what exactly you want to test, but if you want to ask "are these curves the same", you'll want a vector norm. One main type is the $L^p$ norm. In this norm, for a particular $p$ value (note: this is completely different from the p-value in the sense of probability), you take the sum over all $x$-values of $|f(x_i)-g(x_i)|^p$, and then take the $p$-th root of that value. This yields a different norm for each $p$, which can range from $0$ to $\infty$ (the $L^{\infty}$ norm is just the maximum). $L^2$ is the Euclidean/Pythagorean norm. There's also the covariance. Both the $L^p$ norms and the covariance are dependent on the scale of the distribution. That is, doubling both $f$ and $g$ will result in larger values. If you don't want that, you can normalize them. If you divide the covariance by the product of the standard deviations, you get the correlation.
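The $L^p$ recipe above is a few lines of code. A minimal sketch with hypothetical curve values sampled at the same $x$-points:

```python
def lp_distance(f, g, p):
    """(sum_i |f_i - g_i|^p)^(1/p) for finite p, with f and g sampled
    at the same x-values."""
    return sum(abs(a - b) ** p for a, b in zip(f, g)) ** (1 / p)

f = [0.0, 0.5, 1.0, 0.5]
g = [0.1, 0.4, 0.8, 0.5]

l1 = lp_distance(f, g, 1)                      # 0.1 + 0.1 + 0.2 + 0.0 = 0.4
l2 = lp_distance(f, g, 2)                      # Euclidean distance
linf = max(abs(a - b) for a, b in zip(f, g))   # L^infinity: the largest gap
```

Each choice of $p$ weights large pointwise gaps differently: $L^1$ treats all gaps equally, while larger $p$ emphasizes the worst discrepancy.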
38,589
Statistically compare two large continuous datasets
I'm not sure what your data represents, but in landscape/spatial ecology, it's common to have multiple raster datasets representing different variables for a given spatial area. One of the first issues that comes up is Spatial Autocorrelation. Put simply, spatial autocorrelation occurs when you have measured variables at two points close enough together in space that they are not independent, which in turn can undermine the assumptions of your statistical tests (such as your t-test). So the first thing you have to do is figure out if spatial autocorrelation is an issue for your data. I'm not an expert on this, just familiar with the issue, so you'll have to spend some time researching the methods for this. One measure that you will definitely come across is Moran's I, which is probably a good place to start. If you determine that spatial autocorrelation is an issue, then one way you can deal with it is by subsetting your data points so that they are far enough apart that they can be considered independent (there may be other ways that I'm not familiar with). There are statistical tools for determining how far is necessary, but it's been so long since the one time I had to do it that I can't remember what they are. Especially because my dataset at the time was too big and I couldn't get them to run, so I ended up picking a distance that I could justify biologically based on my knowledge of the study system.
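To make Moran's I concrete, here is a hypothetical sketch on a small grid with rook adjacency (binary weights to the four orthogonal neighbours). A real raster workflow would use a spatial library; this just shows the statistic's behaviour on two extreme patterns.

```python
def morans_i(grid):
    """Moran's I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    with w_ij = 1 for orthogonal neighbours on the grid and 0 otherwise."""
    n_rows, n_cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    n = len(vals)
    m = sum(vals) / n
    num = 0.0  # cross-products over ordered neighbour pairs
    w = 0      # total weight = number of ordered neighbour links
    for i in range(n_rows):
        for j in range(n_cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n_rows and 0 <= nj < n_cols:
                    num += (grid[i][j] - m) * (grid[ni][nj] - m)
                    w += 1
    return (n / w) * num / sum((v - m) ** 2 for v in vals)

smooth = [[i + j for j in range(5)] for i in range(5)]         # gradient: I well above 0
checker = [[(i + j) % 2 for j in range(5)] for i in range(5)]  # alternating: I near -1
```

A value near 0 suggests spatial independence; strongly positive or negative values indicate the autocorrelation problem described above.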
38,590
Statistically compare two large continuous datasets
In machine learning for generative adversarial networks (GANs), it is common to use the Wasserstein metric / earth mover's distance to compare output images, following the well-known 2017 paper on Wasserstein GAN. Traditional measures like K-L divergence may be too stringent: see What is the advantages of Wasserstein metric compared to Kullback-Leibler divergence? Also, if your goal is to compare images specifically for being near visual duplicates, you can look into image perceptual hashing/fingerprinting: pHash, and Neal Krawetz's classic introductory blog post on perceptual hashing (it can be as simple as squashing the image down to 8x8 and binarizing, then comparing by Hamming distance).
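The squash-and-binarize idea can be sketched in a few lines. This is a toy average-hash illustration, not pHash itself: real implementations resample properly, while here the input dimensions are assumed to be multiples of the hash size.

```python
def average_hash(img, size=8):
    """img: 2-D list of grayscale values; returns a (size*size)-bit int hash."""
    h, w = len(img), len(img[0])
    bh, bw = h // size, w // size  # assumes h and w are multiples of size
    # Downsample by averaging bh x bw blocks.
    small = [
        sum(img[y][x]
            for y in range(r * bh, (r + 1) * bh)
            for x in range(c * bw, (c + 1) * bw)) / (bh * bw)
        for r in range(size)
        for c in range(size)
    ]
    mean = sum(small) / len(small)
    bits = 0
    for v in small:  # one bit per cell: brighter than the mean or not
        bits = (bits << 1) | (v > mean)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

gradient = [[float(x) for x in range(16)] for _ in range(16)]
brighter = [[v + 0.25 for v in row] for row in gradient]   # same structure, shifted
reversed_ = [[15.0 - v for v in row] for row in gradient]  # opposite structure
```

Uniform brightness changes leave the hash untouched (Hamming distance 0), while structurally different images land far apart, which is exactly the near-duplicate-detection behaviour described above.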
38,591
Statistically compare two large continuous datasets
I would like to perform a statistical test that will quantify this difference and then I can choose a threshold and decide accordingly if these are similar enough for my application or not.

A statistical test will be based on statistical variations and will rank samples based on probability. For instance, the chi-squared and Kolmogorov-Smirnov tests assume a particular distribution, or particular assumptions about the distribution, and compute a statistic under those assumptions. You might want to look to other measures of distance instead, measures that are not inspired by statistical hypothesis testing.

My question is - which statistical test or model or method is adequate for that assuming the distribution is similar to what you see in the examples

Those two images are not enough to create a model of the data and come up with an appropriate distance measure for which you can select a cut-off value. Coming up with a model requires an understanding of the process that generates the data. You cannot just look at two examples of data output and decide what would be a good model from which a distance measure can be defined. (However, if you have thousands of examples, then you could use a neural network to come up with a model learned from the examples.)
38,592
Statistically compare two large continuous datasets
Since you are dealing with histograms of the channels of an image, you could consider using OpenCV to compare the histograms via a distance metric to express how well they match. In OpenCV, this task is somewhat trivial: you could use the function cv.compareHist to compare histograms using a given metric; you can select among four different distance metrics: Correlation, Chi-Square, Intersection, and Bhattacharyya. A handy tutorial on histogram comparison using OpenCV is here.
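For readers without OpenCV at hand, here is a rough NumPy sketch of one of those four metrics, the Bhattacharyya distance in its textbook form (OpenCV's HISTCMP_BHATTACHARYYA applies equivalent normalization internally; the histograms below are made up):

```python
import numpy as np

# Made-up histograms of one colour channel from two similar images.
h1 = np.array([5., 20., 50., 25.])
h2 = np.array([4., 22., 48., 26.])

# Normalize so each histogram sums to 1.
p, q = h1 / h1.sum(), h2 / h2.sum()

# Bhattacharyya coefficient (overlap) and distance.
# d = 0 means identical histograms; d = 1 means no overlap at all.
bc = np.sum(np.sqrt(p * q))
d = np.sqrt(max(0.0, 1.0 - bc))
print(round(d, 4))
```

Small distances indicate well-matching histograms, so a cut-off on d serves as the "similar enough" threshold asked about in the question.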
38,593
Why are confidence intervals of hazard ratios not symmetric?
The log (partial) likelihood maximization methods used to fit survival models work in the log-hazard scale. The regression coefficient estimates are in that scale, and in that scale the covariance of the estimates is assumed to be multivariate normal. In the log-hazard scale, confidence intervals (CI) are thus symmetric. When you exponentiate those coefficients and corresponding CI limits to get results in terms of hazard ratios (HR), the CI are necessarily asymmetric. That's even true for the usual null-hypothesis assumption of log-hazard-ratio of 0 or HR of 1. For example, if the log-hazard-ratio estimate has CI of (-0.2, +0.2) around a point estimate of 0, in the (exponentiated) HR scale the CI are (0.8187, 1.2214), asymmetric about the point estimate of HR = 1. More broadly, as other answers nicely demonstrate, there is no need for CI to be symmetric at all. They often are taken to be symmetric when coefficient estimates can be assumed to have symmetric distributions, but even then there is no rule requiring such a choice. Any interval containing 95% of the probability distribution can be taken as 95% CI.
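The numerical example above is easy to reproduce; this is just arithmetic on the numbers quoted in the answer:

```python
import math

# Symmetric 95% CI on the log-hazard-ratio scale (example from the text).
log_hr, half_width = 0.0, 0.2
lo, hi = log_hr - half_width, log_hr + half_width

# Exponentiating moves to the hazard-ratio scale, where symmetry is lost.
hr, hr_lo, hr_hi = math.exp(log_hr), math.exp(lo), math.exp(hi)
print(round(hr_lo, 4), round(hr_hi, 4))  # 0.8187 1.2214

# The distances of the limits from the point estimate HR = 1 differ.
print(round(hr - hr_lo, 4), round(hr_hi - hr, 4))
```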
38,594
Why are confidence intervals of hazard ratios not symmetric?
My understanding is that the confidence interval for a hazard ratio should be symmetrical about the mean (the distance between the lower limit and the mean is the same as the distance between the mean and the upper limit). This is not correct. Hazard ratios have no reason to be symmetric. (They could in principle be symmetric in specific cases, e.g. if the point estimate is $1.0$ - but they are not symmetric in general.) Here is an argument. Wikipedia says: In its simplest form, the hazard ratio can be interpreted as the chance of an event occurring in the treatment arm divided by the chance of the event occurring in the control arm, or vice versa, of a study. Now, note that whether one arm is considered "treatment" and the other one "control" is absolutely up to interpretation. We could switch the labels at will. And such switching would turn the hazard ratio into its reciprocal, $\text{HR}'=\frac{1}{\text{HR}}$. If, now, hazard ratio CIs were always symmetrical, then this inversion would need to respect this symmetry. Thus, a CI of $(r-\epsilon,r+\epsilon)$ for $\text{HR}$ would need to turn into a CI $(\frac{1}{r+\epsilon},\frac{1}{r-\epsilon})$ for $\text{HR}'$ that is also symmetric about $\frac{1}{r}$, or $$\frac{1}{r+\epsilon} = \frac{1}{r}-\delta, \qquad\frac{1}{r-\epsilon}=\frac{1}{r}+\delta. $$ But this is not possible mathematically in general. Taking reciprocals is not a symmetric operation.
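The mismatch is easy to verify numerically; the point estimate of 2 and half-width of 0.5 below are arbitrary illustration values:

```python
# Numerical check that inverting a hazard ratio breaks CI symmetry.
r, eps = 2.0, 0.5                         # symmetric CI (1.5, 2.5) around HR = 2
inv_lo, inv_hi = 1 / (r + eps), 1 / (r - eps)  # CI for the reciprocal HR' = 1/2

# Distances of the inverted limits from 1/r are unequal:
d_lo = 1 / r - inv_lo   # 0.5 - 0.4    = 0.1
d_hi = inv_hi - 1 / r   # 0.6667 - 0.5 ~ 0.1667
print(d_lo, d_hi)
```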
Why are confidence intervals of hazard ratios not symmetric?
My understanding is that the confidence interval for a hazard ratio should be symmetrical about the mean (the distance between the lower limit and the mean is the same as the distance between the mean
Why are confidence intervals of hazard ratios not symmetric? My understanding is that the confidence interval for a hazard ratio should be symmetrical about the mean (the distance between the lower limit and the mean is the same as the distance between the mean and the upper limit). This is not correct. Hazard ratios have no reason to be symmetric. (They could in principle be symmetric in specific cases, e.g. if the point estimate is $1.0$ - but they are not symmetric in general.) Here is an argument. Wikipedia says: In its simplest form, the hazard ratio can be interpreted as the chance of an event occurring in the treatment arm divided by the chance of the event occurring in the control arm, or vice versa, of a study. Now, note that whether one arm is considered "treatment" and the other one "control" is absolutely up to interpretation. We could switch the labels at will. And such switching would turn the hazard ratio into its reciprocal, $\text{HR}'=\frac{1}{\text{HR}}$. If, now, hazard ratio CIs were always symmetrical, then this inversion would need to respect this symmetry. Thus, a CI of $(r-\epsilon,r+\epsilon)$ for $\text{HR}$ would need to turn into a CI $(\frac{1}{r+\epsilon},\frac{1}{r-\epsilon})$ for $\text{HR}'$ that is also symmetric about $\frac{1}{r}$, or $$\frac{1}{r+\epsilon} = \frac{1}{r}-\delta, \qquad\frac{1}{r-\epsilon}=\frac{1}{r}+\delta. $$ But this is not possible mathematically in general. Taking reciprocals is not a symmetric operation.
Why are confidence intervals of hazard ratios not symmetric? My understanding is that the confidence interval for a hazard ratio should be symmetrical about the mean (the distance between the lower limit and the mean is the same as the distance between the mean
38,595
Why are confidence intervals of hazard ratios not symmetric?
It is just that we are used to symmetric confidence intervals, because this is the most typical/common situation: the case of some sort of linear regression with Gaussian distributed errors. But... there is not a single unique confidence interval. Any region with 95% confidence will do. Often one chooses the region that is smallest in some sense, for instance by using a hypothesis test (the confidence interval can be seen as a region of hypothesis tests) that computes the p-values based on the region with the highest probability density.

Example 1 Below is an example from this question that shows the estimation of the rate parameter $\lambda$ of a population that is exponentially distributed. The estimate is based on the observed sample mean $\bar{x}$. You can see that the confidence interval boundaries (thick black lines) are not symmetrical around the estimate (dotted line). This has two reasons. The estimate for the rate parameter $\hat\lambda$ is not a linear function of the observed $\bar{x}$. The sampling distribution of $\bar{x}$ for a given parameter $\lambda$ is not symmetric.

Example 2 A very clear example is the estimation of the parameter $p$ in a binomial distribution. Say you estimate the number of red and blue balls in an urn. If in some sample you observe zero red balls, then the estimate for the number of red balls will be zero, but the confidence interval won't be symmetric (or it shouldn't be), because otherwise it would include negative values.

Example 3 The answer of Christoph Hanck in the previously linked question shows an example of the Gaussian distribution and how the confidence interval will turn out to be symmetric. But if you transform the parameter for which you compute the interval, then it will again become non-symmetric. Possibly that is the case for the hazard ratio.
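Example 2 can be made concrete. With zero successes, the exact one-sided upper bound solves $(1-p)^n = \alpha$, giving $p_{upper} = 1 - \alpha^{1/n}$, while the lower bound is pinned at 0. A sketch with arbitrary numbers (20 draws, 95% confidence):

```python
# Example 2 in numbers: 0 red balls observed in n = 20 draws.
n, alpha = 20, 0.05
p_hat = 0.0                       # point estimate: zero red balls
p_upper = 1 - alpha ** (1 / n)    # exact one-sided 95% upper bound

# The interval [0, p_upper] (roughly [0, 0.139]) cannot be symmetric
# about the estimate 0 without including impossible negative proportions.
print(round(p_upper, 4))
```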
Why are confidence intervals of hazard ratios not symmetric?
It is just that we are used to symmetric confidence intervals, because this is the most typical/common situation: the case of some sort of linear regression with Gaussian distributed errors. But... th
Why are confidence intervals of hazard ratios not symmetric? It is just that we are used to symmetric confidence intervals, because this is the most typical/common situation: the case of some sort of linear regression with Gaussian distributed errors. But... there is not a single unique confidence interval. Any region with 95% confidence will do. Often one chooses the region that is smallest in some sense. For instance by using a hypothesis test (the confidence interval can be seen a region of hypothesis tests) that computes the p-values based on the region with highest probability density. Example 1 Below is an example from this question that shows the estimation of the rate parameter $\lambda$ of a population that is exponential distributed. The estimate is based on the observed sample mean $\bar{x}$. You can see that the confidence interval boundaries (thick black lines) are not symmetrical around the estimate (dotted line). This has two reasons. The estimate for the rate parameter $\hat\lambda$ is not a linear function of the observed $\bar{x}$. The sample distribution of $\bar{x}$ for a given parameter $\lambda$ is not symmetric. Example 2 A very clear example is the estimation of the parameter $p$ in a binomial distribution. Say you estimate the number of red and blue balls in an urn. If in some sample you observe zero red balls then the estimate for the number of red balls will be zero, but the confidence interval won't be symmetric (or it shouldn't be) or otherwise you include negative values. Example 3 The answer of Christoph Hanck in the previous linked question shows an example of the Gaussian distribution and how the confidence interval will turn out to be symmetric. But if you transform the parameter for which you compute the interval then it will again become non-symmetric. Possibly that is the case for the hazard ratio.
Why are confidence intervals of hazard ratios not symmetric? It is just that we are used to symmetric confidence intervals, because this is the most typical/common situation: the case of some sort of linear regression with Gaussian distributed errors. But... th
38,596
Plotting binary vs. binary to identify relationship
Really, for only two variables with only two possible values, you just make a contingency table. If you want, you can compute the rowwise / columnwise / tablewise proportions. If you really need a plot, a mosaic plot would be fine, or a four fold plot, but it doesn't seem very necessary to me. Here is an example in R:

table(a,b)
#    b
# a   0 1
#   0 5 7
#   1 5 5

round(prop.table(table(a,b)),2)
#    b
# a      0    1
#   0 0.23 0.32
#   1 0.23 0.23

library(vcd)
mosaicplot(table(a,b), shade=T)
fourfold(table(a,b))
38,597
Plotting binary vs. binary to identify relationship
Such relationships are conventionally summarized with contingency tables, as in this (random) example:

        Col 1  Col 2  Col 3  Col 4
Row 1       3      6     40     34
Row 2      18      6      9      1

Typically we are interested in comparing these data to values suggested by some default model, such as a null model of independent row and column proportions. When comparing the data to those values, the actual counts are important because they are proportional to the variances of the differences. Consequently, a good visualization would clearly show the counts and their expected values, preferably organized to parallel the table. Studies by psychologists and statisticians indicate that graphical elements like hue and shade do a relatively poor job of depicting quantities like counts. Although length and position tend to be clearest and most accurate, they are suited only for showing relative counts: that is, their proportions. Not good enough. I therefore propose to represent any count $k$ by drawing $k$ distinct, non-overlapping, identically sized graphical symbols, so that each symbol clearly represents one thing that is counted. To make this work well, my experiments have found the following:

Clustering the symbols into a compact object seems to work better than positioning them randomly within a drawing area.

Overplotting the symbols on a polygon whose area represents the expectation permits a direct visual comparison of the count to its expectation. Rectangles, concentric with the symbol clusters, suffice for this purpose. As a bonus, the standard error of each count, which is proportional to its square root, is thereby represented by the perimeter of its reference polygon. Although this is subtle, it's nice to see such a useful quantity appear naturally in the graphic.

People gravitate towards colorful graphics, but because colors might not be reproduced (think of page charges in a research journal, for instance), I apply color to distinguish the cells but not to represent anything essential.
Here is an example of this solution for the table above: It is immediately clear which cells have overly large counts and which have overly small ones. We even get a quick impression of how much they exceed or fall short of their expectations. With a little practice, you can learn to eyeball the chi-squared statistic from such a plot. I have decorated the figure with the usual accompaniments: row and column labels to the left and top; row and column totals to the right and bottom; and the p-value of a test (in this case, Fisher's Exact test of independence as computed with a million simulated datasets). For comparison, here is the visualization with randomly dispersed symbols: Because the symbols are no longer clustered, it's useless to draw the reference rectangles. Instead, I have used the cell shading to represent expected values. (Darker is higher.) Although this method still works, I get more out of the first (clustered) version. When either or both of the variables are ordered, the same visualization is effective provided the rows and columns follow the ordering. Finally, this works well for $2\times 2$ tables. Here is one that came up in an analysis of an age discrimination case where it was alleged that older workers were preferentially fired. Indeed, the table looks a little incriminating because no younger people were let go at all:

       Old  Young
Kept   135     26
Fired   14      0

The visualization, however, indicates a close agreement between the observations and the expected values under the null hypothesis of no relationship with age: The Fisher Exact test p-value of $0.134$ supports the visual impression. Because I know people will ask for it, here is the R code used to produce the figures.
m <- 2
n <- 4
set.seed(17)
shape <- .8
mu <- 180 / (m*n)
x <- matrix(rpois(m*n, rgamma(m*n, shape, shape/mu)), m, n)
if (is.null(colnames(x))) colnames(x) <- paste("Col", 1:n)
if (is.null(rownames(x))) rownames(x) <- paste("Row", 1:m)
breaks.x <- seq(0, n, length.out=n+1)
breaks.y <- rev(seq(0, m, length.out=m+1))
#
# Testing.
#
p.value <- signif(fisher.test(x, simulate.p.value=TRUE, B=1e6)$p.value, 3)
print(x)
#
# Set up plotting parameters.
#
random <- TRUE
h <- sample.int(m*n)
colors <- matrix(hsv(h / length(h), 0.9, 0.8, 1/2), nrow(x), ncol(x))
eps <- (1 - 1/(1.08))/2 # (Makes the plotting area exactly the right size.)
lim <- c(eps, 1-eps)
plot(lim*n, lim*m, type="n", xaxt="n", yaxt="n", bty="n", xlab="", ylab="",
     xaxs="r", yaxs="r", asp=m/n,
     main=substitute(paste("A ", m %*% n, " Table"), list(m=m, n=n)))
mtext(bquote(italic(p)==.(p.value)), side=1, line=2)
#
# Expectations.
#
gamma <- 6/3 # (Values above 1 reduce the background contrast.)
p.row <- rowSums(x)/sum(x)
p.col <- colSums(x)/sum(x)
if (isTRUE(random)) {
  for (i in 1:m) {
    polygon(c(range(breaks.x), rev(range(breaks.x))), rep(breaks.y[0:1+i], each=2),
            col=hsv(0,0,0, p.row[i]^gamma))
  }
  for (j in 1:n) {
    polygon(breaks.x[c(j,j+1,j+1,j)], rep(range(breaks.y), each=2),
            col=hsv(0,0,0, p.col[j]^gamma))
  }
} else {
  for (i in 1:m) {
    for (j in 1:n) {
      p <- p.row[i] * p.col[j]
      h <- (1 - (breaks.y[i] - breaks.y[i+1]) * sqrt(p))/2
      w <- (1 - (breaks.x[j+1] - breaks.x[j]) * sqrt(p))/2
      polygon(c(breaks.x[j]+w, breaks.x[j+1]-w, breaks.x[j+1]-w, breaks.x[j]+w),
              c(breaks.y[i+1]+w, breaks.y[i+1]+w, breaks.y[i]-w, breaks.y[i]-w),
              col=hsv(0,0,1/2))
    }
  }
}
#
# Borders.
#
gray <- hsv(0,0,5/6)
invisible(sapply(breaks.x, function(x) lines(rep(x,2), range(breaks.y), col=gray)))
invisible(sapply(breaks.y, function(y) lines(range(breaks.x), rep(y,2), col=gray)))
polygon(c(range(breaks.x), rev(range(breaks.x))), rep(range(breaks.y), each=2))
#
# Labels.
#
at <- (breaks.y[-1] + breaks.y[-(m+1)])/2
mtext(rownames(x), at=at, side=2, line=1/4)
mtext(rowSums(x), at=at, side=4, line=1/4)
at <- (breaks.x[-1] + breaks.x[-(n+1)])/2
mtext(colnames(x), at=at, side=3, line=0)
mtext(colSums(x), at=at, side=1, line=1/4)
#
# Samples.
#
runif2 <- function(n, ncol, nrow, lower.x=0, upper.x=1, lower.y=0, upper.y=1,
                   random=TRUE) {
  if (n > nrow*ncol) {
    warning("Unable to generate enough samples")
    n <- nrow*ncol
  }
  if (isTRUE(random)) {
    i <- sample.int(nrow*ncol, n) - 1
  } else {
    # i <- seq_len(n) - 1
    k <- order(outer(nrow*(1:ncol-(ncol+1)/2), ncol*(1:nrow-(nrow+1)/2),
                     function(x,y) x^2+y^2))
    i <- k[seq_len(n)] - 1
  }
  j <- (i %% ncol + 1/2) / ncol * (upper.y - lower.y) + lower.y
  i <- (i %/% ncol + 1/2) / nrow * (upper.x - lower.x) + lower.x
  cbind(i,j)
}
### Adjust the `400` to make the symbols barely overlap ###
cex <- 1 / sqrt(max(x)/400*max(m,n))
eps.x <- eps.y <- 0.05
u <- sqrt(max(x)/ (m*n))
u <- ceiling(u)
for (i in 1:m) {
  for (j in 1:n) {
    points(runif2(x[i,j], ceiling(m*u), ceiling(n*u),
                  breaks.x[j]+eps.x, breaks.x[j+1]-eps.x,
                  breaks.y[i+1]+eps.y, breaks.y[i]-eps.y,
                  random=random),
           pch=22, cex=cex, col=colors[i,j], bg=colors[i,j])
  }
}
38,598
Plotting binary vs. binary to identify relationship
For your data, as @gung has pointed out, you can make a confusion matrix, so something like below:

import pandas as pd
import seaborn as sns

df.columns = ['a','b']
sns.heatmap(pd.crosstab(df['a'], df['b']), annot=True)

Or you can call a mosaic plot from statsmodels that shows the deviation from expected:

import matplotlib.pyplot as plt
from statsmodels.graphics.mosaicplot import mosaic

fig, ax1 = plt.subplots(1)
mosaic(df, ['a','b'], ax=ax1)
fig.show()
38,599
Inference in Time Series: Prophet vs. ARIMA
ARIMA and similar models assume some sort of causal relationship between past values and past errors and future values of the time series: $$Y_{t+h}=f(Y_{t},Y_{t-1},Y_{t-2},....,\epsilon_{t},\epsilon_{t-1},\epsilon_{t-2},...)$$ e.g. the volatility of a stock today is causally driven by the volatility of that stock yesterday and two days ago, the population of a species this year is a direct function of the population of that same species last year, etc... Facebook Prophet doesn't look for any such causal relationships between past and future. Instead, it simply tries to find the best curve to fit to the data, using a linear or logistic curve, and Fourier coefficients for the seasonal components. There is also a regression component, but that is for external regressors, not for the time series itself (The Prophet model is a special case of GAM - Generalized Additive Model). Theoretically speaking, the assumptions underlying Prophet are indeed simplistic and weak - just fit the best curve to your historical data. Since fitting a curve to a limited data set over a specific time period doesn't impose any constraints on how the curve behaves outside of your historical data set, it is entirely possible that the best fitting curve will "go off the rails" outside of the historical time interval. For example, I have often noticed that Prophet can go negative in the future, even if the historical data set has only positive values, because the simplistic assumptions mean that it will naively perpetuate a downward trend forever. This is why Prophet is recommended only for time series where the only informative signals are (relatively stable) trend and seasonality, and the residuals are just noise. In theory, a more rigorous causal or structural approach is more likely to capture signals that will extrapolate into the future. More importantly, if the residuals are not just noise, then an ARIMA model or a Neural Network might be able to capture those relationships...in theory.
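As a rough sketch of the curve-fitting idea (not Prophet's actual implementation, which fits its model with Stan under the hood), ordinary least squares on an intercept, a linear trend, and one Fourier pair reproduces the flavour on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(104)                       # two years of weekly data (made up)
period = 52.0
y = 0.3 * t + 5 * np.sin(2 * np.pi * t / period) + rng.normal(0, 1, t.size)

# Design matrix: intercept, linear trend, one Fourier pair -- a tiny
# GAM-flavoured curve fit in the spirit of Prophet, not its real API.
X = np.column_stack([
    np.ones_like(t, dtype=float),
    t.astype(float),
    np.sin(2 * np.pi * t / period),
    np.cos(2 * np.pi * t / period),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted trend extrapolates indefinitely -- exactly the behaviour that
# can send forecasts "off the rails" outside the training window.
print(beta.round(2))
```

Note that nothing in the fit ties $Y_{t+h}$ to past values of $Y$; it is a pure function of calendar time, which is the core contrast with ARIMA.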
In practice, outside of the examples I mentioned above and a few others, the chances of finding a business time series where the underlying data generating process involves a causal relationship of the type $Y_{t+h}=f(Y_{t},Y_{t-1},Y_{t-2},...)$ are very slim. Think about it: why would sales for a grocery or fashion item ever be driven by a process of the form $Y_t = a_1Y_{t-1}+...a_nY_{t-n}+c+\sigma(t)$? What causal mechanism would there be that says your sales of butter this week should be a linear combination of your butter sales last week and your butter sales from two weeks ago? Or that your web traffic today should be a linear combination of your web traffic from yesterday, two days ago, three days ago, and last week? So at the end of the day, the assumptions of ARIMA and similar models end up being so strong and implausible that, for all of their mathematical rigor, they are just as ad hoc in practice as Prophet or Holt-Winters. So the simplicity of Prophet's approach in practice makes sense for a lot of business time series. Moreover, the authors acknowledge this in their paper.
38,600
Inference in Time Series: Prophet vs. ARIMA
I played with Prophet a bit. They promise big. As I understood it, their claim was for a framework for massive forecasting: if you have 10,000 series to forecast, there's no way to do it manually, so let's just run the thing on all of them automatically, and maybe we'll get a decent set of forecasts on average.

In finance we also forecast massive numbers of series, e.g. loan loss forecasting may involve millions of loans in the portfolio. In this case we manually stratify the portfolio and manually build models for each stratum, then run the same model on all loans. Prophet would not need this, because it would estimate a separate model, potentially with its own variables, for each loan.

However, I tried Prophet on a different problem. I had just a few series, and looked at the quality of the forecast. The problem that I saw was the "change point" detection. In essence, if I understood what it's doing correctly, Prophet adjusts the slope when it thinks it has encountered a change point. That was a problem for me because we sometimes have temporary deviations from the mean - there could be a different regime, then things get back to the old way. In other words, mean reversion is quite prevalent. Prophet would not be able to say what is a true change point. However, this is not a criticism of the framework; it's just generally difficult to detect a change point, whether automatically or manually.
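The mean-reversion pitfall can be illustrated with a toy example (this is not Prophet's actual changepoint logic, just a sketch of the failure mode): a series trends up under a temporary regime, then reverts to its long-run mean. A fit that treats the upswing as a permanent slope change extrapolates far above where the series actually ends up.

```python
# Toy illustration of mistaking a temporary regime for a change point:
# an AR(1) series gets a transient drift for 50 steps, then reverts to its
# long-run mean of 0.
import numpy as np

rng = np.random.default_rng(2)
n = 400
y = np.zeros(n)
for t in range(1, n):
    bump = 0.5 if 200 <= t < 250 else 0.0        # temporary regime shift
    y[t] = 0.9 * y[t - 1] + bump + rng.normal(0, 0.1)

# Treat the upswing as a permanent slope change: fit a line on t in
# [200, 250) and extrapolate it to the end of the series.
window = np.arange(200, 250)
slope, intercept = np.polyfit(window, y[200:250], 1)
extrapolated_end = slope * (n - 1) + intercept   # keeps climbing

actual_end = y[-1]                               # has reverted back toward 0
```

The extrapolated value overshoots badly, while the actual series has long since returned near zero - the model has no way to know whether the change point was real or transient.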