39,101 | Kruskal-Wallis or Fligner test to check homogeneity of variances?

The straightforward answer seems to be Levene's test, also described at Wikipedia. Levene's is applicable in your case because it is less sensitive to departures from normality than an alternative, the Bartlett test. Levene's is parametric but suitable even with some degree of non-normality. If the distribution departed radically from normality, as with extreme outliers, you'd want to use a non-parametric alternative.

I don't see the Kruskal-Wallis test as applicable here, but you'll also want to check other threads such as this one.
39,102 | Inference with Gaussian Random Variable

The law of total variance (which follows from the law of iterated expectations) can help here. We have:

$$Var[Y]=E(Var[Y|X])+Var[E(Y|X)]$$

Now, conditional on $X$, the expected value of $Y$ is $2X+8$ and its variance is $1$. So we have:

$$Var[Y]=E(1)+Var[2X+8]=1+4\,Var[X]=1+\frac{4}{\alpha}$$
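As a numerical sanity check, here is a quick Monte Carlo sketch in Python. It assumes $X \sim N(0, 1/\alpha)$ with mean zero and $N_y \sim N(0,1)$ independent, matching the algebra above; the choice $\alpha = 2$ is arbitrary.

```python
# Monte Carlo check of Var[Y] = 1 + 4/alpha for Y = 2X + 8 + N_y,
# with X ~ N(0, 1/alpha) and N_y ~ N(0, 1) independent (assumed setup).
import random
import statistics

random.seed(0)
alpha = 2.0
n = 200_000

ys = []
for _ in range(n):
    x = random.gauss(0.0, (1.0 / alpha) ** 0.5)   # sd = sqrt(1/alpha)
    n_y = random.gauss(0.0, 1.0)
    ys.append(2 * x + 8 + n_y)

sample_var = statistics.variance(ys)
print(sample_var)   # should be close to 1 + 4/alpha = 3
```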
39,103 | Inference with Gaussian Random Variable

The solution to this homework is a straightforward application of simple algebra and the independence of $X$ and $N_y$: $\mathbb{E} (2 X + N_y)^2 = 4 \mathbb{E} X^2 + 4 \mathbb{E} X \mathbb{E} N_y + \mathbb{E} N_y^2 = 4 Var X + 0 + Var N_y = \frac{4}{\alpha} + 1$ (using $\mathbb{E}X = \mathbb{E}N_y = 0$, so that second moments equal variances).
39,104 | Visualize movie/actor relationships

N.B.: This was previously a (long) comment that I've converted to an answer. Hopefully I'll be able to post an example of what I describe below within a day or two.

Why not try something like a heatmap? Have movies as rows and actors as columns. Maybe sort each of them by the number of actors in the movie and the number of movies each actor has been in, then color each cell where there is a match. This is basically a visualization of the adjacency matrix. The proposed sorting should produce some interesting patterns, and the right use of color could make it both artistic and more informative. Maybe color by movie type, Netflix rating, proportion of male to female actors (or viewers!), etc.
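A rough sketch of that idea in Python (the movie and actor names below are made up): build the movie-by-actor adjacency matrix, sort rows and columns by degree, and render a text "heatmap".

```python
# Sketch of the sorted movie-by-actor adjacency matrix described above,
# rendered as a text heatmap. The credits data are invented.
credits = {
    "Heat": ["De Niro", "Pacino", "Kilmer"],
    "Serpico": ["Pacino"],
    "Casino": ["De Niro", "Pesci"],
    "Goodfellas": ["De Niro", "Pesci", "Liotta"],
}

# Sort actors by how many movies they appear in, movies by cast size.
actors = sorted({a for cast in credits.values() for a in cast},
                key=lambda a: -sum(a in cast for cast in credits.values()))
movies = sorted(credits, key=lambda m: -len(credits[m]))

for m in movies:
    row = "".join("#" if a in credits[m] else "." for a in actors)
    print(f"{m:>10} {row}")
```

With real data you would replace the `#`/`.` cells with a color scale (e.g. by rating), but the sorting logic is the same.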
39,105 | Visualize movie/actor relationships

Check out Gephi; this software has some very good layout algorithms for handling the spaghetti problem: http://gephi.org/features/

In particular, try the ForceAtlas layout: http://forum.gephi.org/viewtopic.php?f=26&t=926

The software lets you control the parameters in real time, and you can move the nodes manually.

(Disclaimer: I'm part of this community.)
39,106 | Visualize movie/actor relationships

Graphviz can optimise the layout; see something similar here.
39,107 | Visualize movie/actor relationships

I wouldn't know how you'd go about constructing this, but I liked the method using hyperbolic geometry:

http://www.newscientist.com/data/images/ns/cms/dn19420/dn19420-1_800.jpg

http://www.newscientist.com/article/dn19420-escherlike-internet-map-could-speed-online-traffic.html
39,108 | Understanding multiple regression output

It seems like you need an introduction to regression. People made book recommendations here; free book recommendations here.

It's hard to make sure you're doing the analysis right when we don't know what the variables are or what the goal is. But based on the output, I can tell you that your second regression specification looks better than your first. I say that because you have two highly significant coefficients, and the adjusted R^2 value took a big jump. Note, though, that although I consider these important clues, it is not true that models with more significant coefficients or higher adjusted R^2 are consistently better. There are lots of other issues to consider.

Your regression models are predicting Y using a and b. In your second model, the estimated regression equation is -0.06807 + (3.01517 * a) - (0.00994 * b) - (1.13782 * a * b).

In other words, plug in a and b, and you get the model's prediction for Y. I could say a lot more, but I'll leave you there and suggest you pick up a textbook.

I strongly recommend you try plotting your data: Y with a on the x-axis, Y with b on the x-axis, and a by b as well.
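To make the "plug in a and b" step concrete, here is a small Python sketch of that fitted equation as a function. The coefficients are the ones quoted above; the example inputs are arbitrary.

```python
# The estimated equation from the second model, written as a function.
def predict_y(a: float, b: float) -> float:
    return -0.06807 + 3.01517 * a - 0.00994 * b - 1.13782 * a * b

# Predicted Y at a = 1, b = 0 (arbitrary example inputs):
print(predict_y(1.0, 0.0))   # -0.06807 + 3.01517 = 2.9471
```

Note how the interaction term makes the slope in a depend on b: the effect of a one-unit increase in a is 3.01517 - 1.13782 * b.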
39,109 | Understanding multiple regression output

The two together don't tell you anything more than the second one would alone! The main effects are uninteresting and misleading when interaction is present. The second model tells you all you need to know. Here are a couple of plots, with R code, to help you understand what that second model looks like...

library(lattice)
a <- rep(seq(-1.37, 2.12, (2.12 - -1.37)/9), 4)
b <- sort(rep(quantile(seq(-1.03, 1.30, .01), c(.2, .4, .6, .8)), 10))
y <- -0.06807 + (3.01517 * a) + (-0.00994 * b) + (-1.13782 * a * b)
xyplot(y ~ a | factor(b))

This one shows the estimated effect of a on y by levels of b. At each level of b, the relationship is positive. This is your significant positive slope for the main effect of a in the presence of the interaction a:b.

a <- sort(rep(quantile(seq(-1.37, 2.12, .01), c(.2, .4, .6, .8)), 10))
b <- rep(seq(-1.03, 1.30, (1.30 - -1.03)/9), 4)
y <- -0.06807 + (3.01517 * a) + (-0.00994 * b) + (-1.13782 * a * b)
xyplot(y ~ b | factor(a))

This image shows the estimated effects of b on y within levels of a. You can see why you have no significant main effect for b: the direction of the y~b relationship depends on the level of a. Thus, no independent relationship (imagine averaging those lines) but a significant interaction (a clear pattern once you take the level of a into account).
39,110 | Understanding multiple regression output

You may be interested in this introduction to the linear model (the basis of almost any statistical analysis), and linear regression in particular:

it thoroughly explains many of the mathematical aspects of linear regression, by detailing all the important equations (which is usually left as an exercise anywhere else on the Internet);

it uses a simple, yet informative enough, data set as an example;

and it gives all the R commands required to do the computations step by step, as well as plot the results.
39,111 | Understanding multiple regression output

If you want a book specifically on this sort of regression - as opposed to data analysis in general - I recommend Regression Analysis by Example by Chatterjee and Price. Good, not technical, but it doesn't oversimplify.
39,112 | How to generate user-friendly summaries of cluster analysis?

I like a 2D plot that shows the clusters and the actual data points, so readers can get an idea of the quality of the clustering. If there are more than two factors, you can put the principal components on the axes, as in my example:

The equivalent 3D plots are only good if the viewer can interact with them to get a sense of depth and obscured pieces. Here's a 3D example with the same data.
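For what it's worth, here is a rough, dependency-free Python sketch of the projection step: PCA via power iteration on made-up 4-dimensional cluster data, producing the (PC1, PC2) coordinates you would feed to a scatter plot. In practice you would use a library routine (e.g. R's prcomp) rather than this hand-rolled version.

```python
# Sketch: project multi-factor data onto its first two principal components
# for a 2D cluster plot. Pure-stdlib PCA via power iteration; toy data.
import random

random.seed(1)

# Two 4-dimensional "clusters", 50 points each.
data = [[random.gauss(c, 0.3) for c in centre]
        for centre in ([0, 0, 0, 0], [3, 3, 3, 3])
        for _ in range(50)]

def pca_2d(rows):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    # sample covariance matrix
    cov = [[sum(xi[i] * xi[j] for xi in x) / (n - 1) for j in range(d)]
           for i in range(d)]

    def top_eigvec(m):
        v = [1.0] * d
        for _ in range(200):            # power iteration
            w = [sum(m[i][j] * v[j] for j in range(d)) for i in range(d)]
            norm = sum(c * c for c in w) ** 0.5
            v = [c / norm for c in w]
        return v

    v1 = top_eigvec(cov)
    lam1 = sum(v1[i] * sum(cov[i][j] * v1[j] for j in range(d)) for i in range(d))
    # deflate the first component, then extract the second
    cov2 = [[cov[i][j] - lam1 * v1[i] * v1[j] for j in range(d)] for i in range(d)]
    v2 = top_eigvec(cov2)
    return [(sum(xi[j] * v1[j] for j in range(d)),
             sum(xi[j] * v2[j] for j in range(d))) for xi in x]

scores = pca_2d(data)   # (PC1, PC2) pairs, ready for a scatter plot
```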
39,113 | How to generate user-friendly summaries of cluster analysis?

Bubble charts are a good visual device you can use to represent your clusters. Pick your four most important variables and plot each cluster using the x and y axes, plus the size and color of the bubble, to represent the four factors. If you have many variables, you can perform a principal components analysis first to reduce them to four factors.

http://www.google.com/images?um=1&hl=en&rlz=1I7GGLD_en&tbs=isch:1&aq=f&aqi=g6&oq=&q=bubble%20chart

-Ralph Winters
39,114 | How to generate user-friendly summaries of cluster analysis?

The best method I have found for a non-technical audience is to present a table or plots of the centroids of each cluster, along with a description of that cluster. In the business world (not sure of your domain), it helps to give each cluster a name describing its principal characteristics. An example when clustering customers would be "Long-time loyals" for the cluster that is generally comprised of long-tenured customers.
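A minimal Python sketch of building such a centroid table (the customer data, features, and cluster names here are invented):

```python
# Group labelled points by cluster and average each feature to get the
# centroid table described above. Data and labels are made up.
points = {
    "Long-time loyals": [(9.1, 120.0), (8.7, 140.0), (9.5, 110.0)],  # (tenure yrs, spend)
    "New arrivals":     [(0.5, 60.0), (1.2, 80.0)],
}

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[j] for r in rows) / n for j in range(len(rows[0])))

centroids = {label: centroid(rows) for label, rows in points.items()}
for label, c in centroids.items():
    print(f"{label:>16}: tenure={c[0]:.1f} yrs, spend={c[1]:.0f}")
```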
39,115 | Predicting daily electricity load - fitting time series

I've played around with electrical demand models, and I can tell you that it's a good idea to start "zoomed out". Each region has its own characteristics, but the general idea is the same. Electric demand is a function of many variables; start with the slowest-moving terms.

General Economic Activity is the slowest-moving term (typically the 3- to 8-year time frame). This term is typically related to Gross Domestic Product for the area. Electrical demand may generally grow faster than GDP, but the demand "ups" during good economic times and "downs" during recessions provide an obvious link to GDP. See the blue line in the first graph below.

Next is the Seasonal Term (annual time frame). For instance, in the U.S. the summer peak shows up in August, the winter peak in January, the spring trough in April and the fall trough in November. See the red line in the top two graphs below. In the second graph, I have shown the Seasonal Term as constant within each month, but you can easily improve that with a linear or non-linear relationship for each month (monthly time frame).

You are now down to the daily time frame. The bottom graph shows the electrical demand for Texas for one 24-hour period (12/22/2010). The day-time peak was at 7:00 PM (19:00) and the night-time trough was at 4:00 AM (04:00). This time frame is where you want to consider holidays, weekends, weather, etc. However, keep in mind that the other variables (the first two terms above) are also affecting your results.

So, from your description, you have data for 11 months. Look at the first graph below and assume that you have data for 11 months. Is that enough to get an idea of the Seasonal Term for the year? I would use a minimum of 10 years of monthly data to get a feel for the Seasonal Term. The idea here is to tweak the structure of your daily model differently during months of "rapid seasonal change" versus months of "slow seasonal change".

Next, I would play around with the size and structure of the "data window" you will use to estimate your daily model. For example, will you get a better daily model if you include daily fall and winter data when estimating a summer daily model? Or is it better to use 10 rescaled "summer data windows", one for each year in 10 years of data, when estimating a summer daily model?

Once you get all of the deterministic terms working well, then, and only then, would I go after the ARIMA terms.
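The layered deterministic structure described above (slow economic trend, annual Seasonal Term, daily shape) can be sketched with a toy model. Every coefficient below is invented purely to illustrate the layering, not taken from any real grid.

```python
# Toy sketch of layered demand: trend + annual seasonal term + daily shape.
# All numbers are invented for illustration only.
import math

def demand(hour_of_year: int) -> float:
    t = hour_of_year / 8760.0                      # fraction of the year
    trend = 100.0 + 5.0 * t                        # slow economic growth
    seasonal = 20.0 * math.cos(4 * math.pi * t)    # winter/summer peaks, spring/fall troughs
    daily = 10.0 * math.sin(2 * math.pi * ((hour_of_year % 24) - 10) / 24)
    return trend + seasonal + daily

series = [demand(h) for h in range(8760)]          # one year of hourly load
```

Fitting proceeds in the same order the terms are listed: pin down the slow terms first, then model the daily residual.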
39,116 | Predicting daily electricity load - fitting time series | @Grega:
I ran out of room in the comment area, so I started another answer.
From your comment, that's the problem. Each system/grid has its own "signature". It's a combination of system dynamics, age, weather, local economics, cultures, traditions, etc. In Japan, its worse. Some sections of the country run at 50Hz and other sections run at 60Hz. These grids connect at high voltage DC stations, which means some areas behave like electrical "islands" (totally different behavior than their neighbors just a few miles away). If you're zeroed-in on one factory, the predictability goes down even more. Fewer users means higher uncertainty.
No matter how you do this, it's going to be messy.
I would filter out the daily/weekday/weekend/holiday component to get a Seasonal Term. How? A 31 day CENTERED moving average? 51 day CMA? XX day CMA? You'll have to experiment with that, but I would make it a variable so you can tweek it later. Whatever filter you end up with, keep in mind that it stops short of either end of your data (a 31 day centered moving average will start on the 16th day of your raw data and end on the 16th day from the end of your raw data).
Next, the best you can do with 11 months of data is to "make up" month 12 (draw a line from your filtered series at its end to its beginning). Next, subtract your Seasonal Term (the filtered data) from your raw data to get a residual. Fit the residual to your weather data, allowing for factors for day-of-the-week and holidays.
Some factors that you may need to add are:
1) A "production run" factor. Whatever they make at this factory, if they switch from one product/category to another, the power demand required to make one product may be different than what is required for another product.
2) A "change over" factor. This is when they shift from one product/category to another. Sometimes it takes days of preparation for the switch.
3) A "work shift" factor. If they have three shifts per day, power demand for the late shift will probably be significantly different than the day shifts.
Good luck. As you probably know, this kind of a problem can get real frustrating.
====== Edit to answer Grega's first comment (01/25/2011) ====================
@Grega: Answering your first comment, I'm afraid it doesn't. The idea behind a model like this would be to have multiple "similar instances" of your 32 future points, so you can fit those points, and then predict new points. You don't have typical "similar instances" because yesterday was not the same day-of-the-week as today. You have to use last week's same day-of-the-week and the previous week's same day-of-the-week, etc. By the time you get several "similar instances" (say 20), you're typically more than 20 weeks in the past (a holiday may screw up one or more of your weeks). At that point, you're in a whole different season of the year. So, in order to use those days in another season, you need to remove the Seasonal Term from the raw data.
It's a sloppy situation, but it's the best you can do with 11 months of data. | Predicting daily electricity load - fitting time series | @Grega:
I ran out of room in the comment area, so I started another answer.
From your comment, that's the problem. Each system/grid has its own "signature". It's a combination of system dynamics, a | Predicting daily electricity load - fitting time series
@Grega:
I ran out of room in the comment area, so I started another answer.
From your comment, that's the problem. Each system/grid has its own "signature". It's a combination of system dynamics, age, weather, local economics, cultures, traditions, etc. In Japan, its worse. Some sections of the country run at 50Hz and other sections run at 60Hz. These grids connect at high voltage DC stations, which means some areas behave like electrical "islands" (totally different behavior than their neighbors just a few miles away). If you're zeroed-in on one factory, the predictability goes down even more. Fewer users means higher uncertainty.
No matter how you do this, it's going to be messy.
I would filter out the daily/weekday/weekend/holiday component to get a Seasonal Term. How? A 31 day CENTERED moving average? 51 day CMA? XX day CMA? You'll have to experiment with that, but I would make it a variable so you can tweek it later. Whatever filter you end up with, keep in mind that it stops short of either end of your data (a 31 day centered moving average will start on the 16th day of your raw data and end on the 16th day from the end of your raw data).
Next, the best you can do with 11 months of data is to "make up" month 12 (draw a line from your filtered series at its end to its beginning). Next, subtract your Seasonal Term (the filtered data) from your raw data to get a residual. Fit the residual to your weather data, allowing for factors for day-of-the-week and holidays.
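The centered-moving-average step above (and why it stops 15 points short of each end for a 31-day window) can be sketched numerically. A minimal Python/NumPy illustration, hypothetical and not from the original answer:

```python
import numpy as np

def centered_ma(x, window=31):
    """Centered moving average; NaN for the first/last window//2 points,
    matching the 'stops short of either end' caveat in the text."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(x, kernel, mode="valid")
    pad = window // 2
    out = np.full(len(x), np.nan)
    out[pad:len(x) - pad] = smoothed
    return out

x = np.arange(100, dtype=float)   # stand-in for a daily load series
s = centered_ma(x, window=31)
# the first defined value is at index 15, i.e. the 16th day
```

Subtracting this smoothed series from the raw data then gives the residual to fit against weather and day-of-the-week factors.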
Some factors that you may need to add are:
1) A "production run" factor. Whatever they make at this factory, if they switch from one product/category to another, the power demand required to make one product may be different than what is required for another product.
2) A "change over" factor. This is when they shift from one product/category to another. Sometimes it takes days of preparation for the switch.
3) A "work shift" factor. If they have three shifts per day, power demand for the late shift will probably be significantly different than the day shifts.
Good luck. As you probably know, this kind of a problem can get real frustrating.
====== Edit to answer Grega's first comment (01/25/2011) ====================
@Grega: Answering your first comment, I'm afraid it doesn't. The idea behind a model like this would be to have multiple "similar instances" of your 32 future points, so you can fit those points, and then predict new points. You don't have typical "similar instances" because yesterday was not the same day-of-the-week as today. You have to use last week's same day-of-the-week and the previous week's same day-of-the-week, etc. By the time you get several "similar instances" (say 20), you're typically more than 20 weeks in the past (a holiday may screw up one or more of your weeks). At that point, you're in a whole different season of the year. So, in order to use those days in another season, you need to remove the Seasonal Term from the raw data.
It's a sloppy situation, but it's the best you can do with 11 months of data.
39,117 | Determining trend significance in a time series | What you are describing is commonly referred to as auto correlated errors. I would suggest you look up resources on ARIMA modelling. ARIMA modelling will allow you to model the correlation in your error term, and hence allow you to assess your trend variable independent of this auto correlation (or other independent variables you are interested in).
My suggested reading for an intro to ARIMA modelling would be
Applied Time Series Analysis for the Social Sciences 1980
by R McCleary ; R A Hay ; E E Meidinger ; D McDowall
But there are plenty of resources (time series analysis is a massive field of study). You would probably be able to turn up some good online resources with just a google search if you don't have access to an academic library. I just turned up this page, Statistica ARIMA, it has a brief but very concise description of ARIMA modelling as well as other methods for time series analysis.
39,118 | Determining trend significance in a time series | To add to the existing answers, if you are using R, a simple way to proceed is to allow the ARMA errors to be modelled automatically using auto.arima(). If x is your time series, then you can proceed as follows.
t <- 1:length(x)
auto.arima(x,xreg=t,d=0)
This will fit the model $x_t = a + bt + e_t$ where $e_t\sim\text{ARMA}(p,q)$ and $p$ and $q$ are selected automatically using the AIC.
The resulting output will give the value of $b$ and its standard error. Here is an example:
Series: x
ARIMA(3,0,0) with non-zero mean
Call: auto.arima(x = x, xreg = t)
Coefficients:
ar1 ar2 ar3 intercept t
-0.3770 0.1454 -0.2351 563.9654 0.0376
s.e. 0.1107 0.1190 0.1145 11.4725 0.2378
sigma^2 estimated as 5541: log likelihood = -475.85
AIC = 963.7 AICc = 964.81 BIC = 978.21
In this case, $p=3$ and $q=0$. The first three coefficients give the autoregressive terms, $a$ is the intercept and $b$ is in the t column. In this (artificial) example, the slope is not significantly different from zero.
The auto.arima function is using MLE rather than GLS, but the two are asymptotically equivalent.
The use of a Cochrane-Orcutt procedure only works if the error is AR(1). So the above is much more general and flexible.
39,119 | Determining trend significance in a time series | Generalised least squares (GLS) is one potential option here. The OLS estimates of the parameters are given by:
$$\hat{\beta} = (X^{T}\Sigma^{-1}X)^{-1}X^{T}\Sigma^{-1}y$$
Normally we leave out $\Sigma$ as in OLS it is defined as $\sigma^2 \mathbf{I}$, i.e. an identity matrix multiplied by the estimated residual standard error. $\mathbf{I}$ is the assumption of uncorrelated errors; an observation is perfectly correlated with itself and is uncorrelated with any other observation.
GLS relaxes this independence assumption by allowing $\Sigma$ to take different forms. Usually we choose a simple process to parametrise $\Sigma$, such as an AR(1). In an AR(1) the correlation between two errors at times $t$ and $s$ is
$$\mathrm{cor}(\varepsilon_s \varepsilon_t) = \left\lbrace \begin{array}{ll}
1 & \mathrm{if} \; s = t \\
\rho^{|t-s|} & \mathrm{else} \\
\end{array}
\right. $$
Which would give us the following error covariance matrix:
$$\mathbf{\Sigma} = \sigma^2 \left( \begin{array}{ccccc}
1 & \rho & \rho^2 & \cdots & \rho^{n-1} \\
\rho & 1 & \rho & \cdots & \rho^{n-2} \\
\rho^2 & \rho & 1 & \cdots & \rho^{n-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\rho^{n-1} & \rho^{n-2} & \rho^{n-3} & \cdots & 1 \\
\end{array} \right)$$
An additional parameter estimate is required, $\rho$.
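Both the AR(1) covariance above and the GLS estimator are straightforward to compute directly. A small illustration in Python/NumPy (Python is used here purely for illustration — the answer's own tooling is R):

```python
import numpy as np

def ar1_sigma(n, rho, sigma2=1.0):
    """AR(1) error covariance: Sigma[t, s] = sigma2 * rho**|t - s|."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(np.subtract.outer(idx, idx))

def gls(X, y, Sigma):
    """GLS estimate (X' S^-1 X)^-1 X' S^-1 y, using solves rather than
    explicit matrix inverses for numerical stability."""
    Si_X = np.linalg.solve(Sigma, X)
    Si_y = np.linalg.solve(Sigma, y)
    return np.linalg.solve(X.T @ Si_X, X.T @ Si_y)

# With Sigma = sigma^2 * I this reduces to ordinary least squares.
Sigma = ar1_sigma(5, rho=0.8)
```

Note that with $\rho = 0$ (so $\Sigma = \sigma^2 \mathbf{I}$) the estimate coincides with OLS, as the formulas above require.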
More complex processes for $\Sigma$ can be employed, including ARMA models. In R, these sorts of models can be fitted using the gls() function in package nlme.
If you are an R user, you might also take a look at the sandwich package which allows for something similar to the above, but where you estimate the OLS model and then afterwards, estimate $\Sigma$ and use that as a plug-in value to correct the standard errors of the OLS parameters.
39,120 | Determining trend significance in a time series | Along the lines of a previous answer, if all assumptions for OLS are met except for the fact that errors are correlated, maybe something as simple as a Cochrane-Orcutt correction would be enough to solve your problem.
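For reference, the classic Cochrane-Orcutt iteration alternates between estimating $\rho$ from lag-1 residuals and refitting OLS on quasi-differenced data. A hypothetical sketch in Python/NumPy (not from the original answer; `cochrane_orcutt` is an invented helper):

```python
import numpy as np

def cochrane_orcutt(X, y, n_iter=10):
    """Hypothetical sketch: alternate between (1) estimating rho from
    lag-1 residuals and (2) refitting OLS on quasi-differenced data."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rho = 0.0
    for _ in range(n_iter):
        resid = y - X @ beta
        rho = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
        X_star = X[1:] - rho * X[:-1]   # quasi-differencing
        y_star = y[1:] - rho * y[:-1]
        beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta, rho

# Simulated trend with AR(1) errors: y = 2 + 0.05*t + e, e_t = 0.7*e_{t-1} + u_t
rng = np.random.default_rng(1)
n = 500
t = np.arange(n, dtype=float)
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.7 * e[i - 1] + rng.normal()
y = 2.0 + 0.05 * t + e
X = np.column_stack([np.ones(n), t])
beta, rho = cochrane_orcutt(X, y)
```

On data like this the recovered slope is close to the true 0.05 and the recovered $\rho$ is close to 0.7; as noted above, the correction only makes sense when the error really is AR(1).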
39,121 | Fuzzy textbooks | Here is the best book I would recommend for the subject:
http://www.amazon.com/Fuzzy-Sets-Logic-Theory-Applications/dp/0131011715
Here is an easy to read book:
http://www.amazon.com/Fuzzy-Logic-Revolutionary-Computer-Technology/dp/0671875353
Besides, here is a list of links that might help:
http://www.seattlerobotics.org/encoder/mar98/fuz/flindex.html
http://www.fuzzy-logic.com/
http://videolectures.net/acai05_berthold_fl/
39,122 | Fuzzy textbooks | I have no experience with fuzzy things (well, apart from Fuzzy Felt) but this book looks interesting:
Buckley, James J. Fuzzy probability and statistics. Springer, 2006. ISBN 9783540308416.
39,123 | Fuzzy textbooks | If you are looking for exhaustiveness, Springer has the Handbook of Fuzzy Systems series which covers most topics. I particularly recommend Fundamentals of Fuzzy Sets edited by Dubois & Prade which was particularly helpful to me when I was taking a graduate class in uncertainty modelling & possibility theory (with one of the editors/authors).
39,124 | Combining repeated experiments into one dataset | Just add "experiment" as an effect to your model; that should account for the shift between experiments and let you gain the power of increased N across experiments to detect effects of concentration and time.
In R, if using ANOVA and treating time as a factor (i.e. not numeric), then do:
library(ez)
ezANOVA(
data = my_data
, dv = .(my_dv)
, wid = .(individual)
, within = .(time)
, between = .(concentration,experiment)
)
However, this:
treats experiment as a fixed effect, whereas it might more reasonably be considered a random effect (thanks Henrik!)
treats time as non-continuous
assumes sphericity across the levels of time
An approach that solves all three issues is to employ a mixed effects model. If you think that the effect of time is linear, then leave time as a numeric variable and do:
library(lme4)
lmer(
data = my_data
, formula = my_dv ~ time*concentration+(1|individual)+(1|experiment)
)
If you don't think time is linear, you could convert it to a factor and repeat the above, or use generalized additive mixed modelling:
library(gamm4)
fit <- gamm4(
data = my_data
, formula = my_dv ~ time+concentration+s(time,by=concentration,bs='tp')
, random = ~ (1|individual) + (1|experiment)
)
print(fit$gam)
That assumes that experiment only shifts the time function, but lets concentration change the shape of the time function. I have a hard time figuring out how to visualise the results from single gamm4 fits, so I usually obtain the fitted model's predictions across the fixed-effects space then bootstrap (in your case, sampling individuals with replacement within each experiment) confidence intervals around these predictions.
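The bootstrap idea in the previous paragraph — resampling individuals with replacement to get confidence intervals — can be sketched generically. A minimal Python illustration (hypothetical; the answer's own code is R, and `cluster_bootstrap_ci` is an invented helper name), shown here for the grand mean rather than model predictions:

```python
import numpy as np

def cluster_bootstrap_ci(values_by_individual, n_boot=2000, alpha=0.05, seed=0):
    """Percentile CI for the grand mean, resampling whole individuals
    (clusters) with replacement. Hypothetical helper, for illustration."""
    rng = np.random.default_rng(seed)
    k = len(values_by_individual)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        pick = rng.integers(0, k, size=k)
        stats[b] = np.mean(np.concatenate([values_by_individual[i] for i in pick]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# 30 individuals, 10 repeated measures each, around a true mean of 5
rng = np.random.default_rng(1)
data = [rng.normal(5, 1, size=10) + rng.normal() for _ in range(30)]
lo, hi = cluster_bootstrap_ci(data)
```

For the nested design in the question, the resampling unit would be individuals within each experiment, and the statistic would be the fitted model's prediction rather than the mean.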
Also, all of the above assume that residuals are gaussian; if you're dealing with anything different (e.g. binomial data), then you need to change the "family" arguments of lmer and gamm4 (ezANOVA can't do anything but gaussian).
39,125 | Combining repeated experiments into one dataset | I think a way in which you could analyze the two experiments together is by defining a multilevel/hierarchical model. The individuals are nested within each experiment.
The standard for this approach is Gelman's book (I think).
The Journal of Memory and Language had a special issue on analyzing data in 2008 which covered hierarchical models and even some examples in R.
Others here can probably provide good web-resources as well.
39,126 | Negative R2 on Simple Linear Regression (with intercept) | Detailed explanation of the problem:
In the case of X being near-singular (high collinearity/covariance between features), different issues were coming from both scipy.linalg.lstsq() and sklearn.linear_model.LinearRegression().
Source of error 1: As @SextusEmpiricus explained, the matrix being near-singular leads to rounding errors that impact enormously the final predictions. In this sense, scipy.linalg.lstsq() is silently failing WITHOUT raising any warning or error.
Source of error 2: The matrix coming from pandas was F-contiguous. sklearn converts it to C-contiguous before calling scipy.linalg.lstsq() and then predict() does a matrix multiplication straight from the F-contiguous array. This led to another layer of rounding errors. I opened another question here on Stack Overflow
Source of error 3: The first thing that LinearRegression() does is to center the dataframe. This goes badly in my case; I still struggle to understand why exactly.
Note: Please note that these rounding errors also depend on CPUs and hardware, which makes it even harder to achieve reproducibility.
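To get a feel for how ill-conditioned a design of near-duplicate smoothed features can be, here is a small NumPy illustration (hypothetical, not from the original post): two almost-identical columns already push the condition number to enormous values, which is exactly the regime where lstsq's rounding errors dominate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10, 1, size=1000)
# two nearly identical columns, mimicking neighbouring ewm(com) features
X = np.column_stack([x, x + 1e-9 * rng.normal(size=1000)])
cond = np.linalg.cond(X)
# cond is astronomically large: least-squares answers become rounding-dominated
```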
(Partial) Work-Around:
To work around the sklearn problems, one can:
Ensure input matrix/array are C-contiguous
Stop relying on LinearRegression's fit_intercept=True and instead center the data manually first:
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

for seed in range(1000):
np.random.seed(seed)
s = pd.Series(np.random.normal(10, 1, size=1_000))
l_com = np.arange(100)
df_Xy = pd.concat([s.ewm(com=com).mean() for com in l_com], axis=1)
df_Xy['y'] = s.shift(-1)
df_Xy.dropna(inplace=True)
X = np.ascontiguousarray(df_Xy[l_com].values)
y = np.ascontiguousarray(df_Xy.y.values)
X_offset = X.mean(axis=0)
y_offset = y.mean()
X_centered = X - X_offset
y_centered = y - y_offset
model = LinearRegression(fit_intercept=False) # We don't rely on sklearn fit_intercept anymore
model.fit(X_centered, y_centered)
assert model.score(X_centered, y_centered) > 0 # ALL GOOD
Moving forward / Long-term Solution:
I opened an issue in scipy Github to raise a Warning in scipy.linalg.lstsq when the X matrix is near-singular.
I opened an issue in the sklearn project on Github, about the inconsistency between C-cont vs F-cont arrays.
39,127 | Negative R2 on Simple Linear Regression (with intercept) | I can reproduce it with np.random.seed(15) and dig a bit deeper. It seems a lot like a computational round-off error due to the high collinearity.
The steps I took
I manually added a column with ones to the matrix X
#X = df_Xy[l_com]
X = pd.concat([pd.DataFrame(np.repeat(1, 999)),df_Xy[l_com]], axis=1)
and used directly the function lstsq(X, y_true), for which the function LinearRegression is a wrapper, and manually compute the sum of squares
p, res, rnk, sin = lstsq(X, y_true)
pred = np.matmul(X, p)
RSS = np.sum(np.power(y_true - model.predict(X=X), 2)) ## 1007.6190
RSS2 = np.sum(np.power(y_true - pred, 2)) ## 1007.6190
TSS = np.sum(np.power(y_true - mean(y_true), 2)) ## 995.24937
The R-squared is still negative.
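Plugging the RSS and TSS above into the definition $R^2 = 1 - \mathrm{RSS}/\mathrm{TSS}$ confirms the negative value (quick Python check):

```python
RSS = 1007.6190   # residual sum of squares reported above
TSS = 995.24937   # total sum of squares reported above
r2 = 1 - RSS / TSS
# r2 is about -0.0124: RSS exceeds TSS, so the fit is worse than the mean
```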
(Interestingly, the above is for intercept=False; when I set it to true the result becomes even worse, but I can't yet figure out what the source code does with that boolean parameter.)
A reason for this behavior might be that the parameters are very large because you generated the coefficients with some exponentially weighting (I do not know enough about python to figure out easily what you are doing there, and you might provide more comments about your code) and created highly correlated features. The lstsq gives as output a very small effective rank and the coefficients are in the order of $\pm 10^{10}$.
At the moment this is as far as I can get. I find the code in python libraries difficult to decipher. The linearmodel from sklearn refers to a function lstsq from scipy/linalg and that function refers to 'gelsd', 'gelsy', 'gelss' functions from lapack which is a fortran code that is somewhere but through all the layers of wrappers and imported packages it is difficult to figure out what happens in the blackbox between input and output of linearmodel.
The influence of the processor
print(model.score(X, y_true))
# -0.15802176533843926 = NEGATIVE R2 on VM 1
# -0.05854780689129546 on VM 2 (? dependent on CPU ?)
In the answer to this question on stack overflow it is explained that the algorithm might take slightly different steps for different CPU architectures and as a consequence there can be small differences in results for different CPUs: Is it reasonable to expect identical results for LAPACK routines on two different processor architectures?
Example: for a computer we have $(7+8) \times (1/9) \neq (7/9+8/9)$. Demonstration with R-code
options(digits=22)
(7/9) + (8/9) # 1.666666666666666518637
15/9 # 1.666666666666666740682
An algorithm might make such subtle changes in the computation to optimize the calculation speed for a processor.
In the case of a matrix inversion of a nearly singular matrix these small errors might get amplified due to the sensitivity of the computation on small differences.
(I don't have the ability to verify this with different processors, but I will check whether changing the number of cores can have a similar influence.) | Negative R2 on Simple Linear Regression (with intercept) | I can reproduce it with np.random.seed(15) and dig a bit deeper. It seems a lot like a computational round-off error due to the high collinearity.
The steps I took
I manually added a column with ones | Negative R2 on Simple Linear Regression (with intercept)
I can reproduce it with np.random.seed(15) and dig a bit deeper. It seems a lot like a computational round-off error due to the high collinearity.
The steps I took
I manually added a column with ones to the matrix X
#X = df_Xy[l_com]
X = pd.concat([pd.DataFrame(np.repeat(1, 999)),df_Xy[l_com]], axis=1)
and used directly the function lstsq(X, y_true), for which the function LinearRegression is a wrapper, and manually compute the sum of squares
p, res, rnk, sin = lstsq(X, y_true)
pred = np.matmul(X, p)
RSS = np.sum(np.power(y_true - model.predict(X=X), 2)) ## 1007.6190
RSS2 = np.sum(np.power(y_true - pred, 2)) ## 1007.6190
TSS = np.sum(np.power(y_true - mean(y_true), 2)) ## 995.24937
The R-squared is still negative.
(the above is for intercept=False interestingly, when I set it true then the result becomes even worse, but I can't yet figure out what the sourcecode does with that boolean parameter)
A reason for this behavior might be that the parameters are very large because you generated the coefficients with some exponentially weighting (I do not know enough about python to figure out easily what you are doing there, and you might provide more comments about your code) and created highly correlated features. The lstsq gives as output a very small effective rank and the coefficients are in the order of $\pm 10^{10}$.
At the moment this is as far as I can get. I find the code in python libraries difficult to decipher. The linearmodel from sklearn refers to a function lstsq from scipy/linalg and that function refers to 'gelsd', 'gelsy', 'gelss' functions from lapack which is a fortran code that is somewhere but through all the layers of wrappers and imported packages it is difficult to figure out what happens in the blackbox between input and output of linearmodel.
The influence of the processor
print(model.score(X, y_true))
# -0.15802176533843926 = NEGATIVE R2 on VM 1
# -0.05854780689129546 on VM 2 (? dependent on CPU ?)
In the answer to this question on stack overflow it is explained that the algorithm might take slightly different steps for different CPU architectures and as a consequence there can be small differences in results for different CPUs: Is it reasonable to expect identical results for LAPACK routines on two different processor architectures?
Example: in floating-point arithmetic a computer may find $(7+8)/9 \neq 7/9 + 8/9$. Demonstration with R code:
options(digits=22)
(7/9) + (8/9) # 1.666666666666666518637
15/9 # 1.666666666666666740682
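The same discrepancy shows up in Python, which uses the same IEEE-754 doubles:

```python
# Two mathematically equal expressions round to different doubles:
a = 7/9 + 8/9   # 1.6666666666666665
b = 15/9        # 1.6666666666666667
print(a == b)   # False
```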
An algorithm might make such subtle changes in the computation to optimize the calculation speed for a processor.
In the case of a matrix inversion of a nearly singular matrix these small errors might get amplified due to the sensitivity of the computation on small differences.
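A minimal numpy sketch of this amplification (the design matrix, coefficients, and noise scales here are my own assumptions, not the question's data): two nearly collinear columns give a huge condition number, and a tiny perturbation of $y$ moves the fitted coefficients by many orders of magnitude more than the perturbation itself.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 999
x = rng.normal(size=n)
# nearly collinear design: third column differs from the second by ~1e-9
X = np.column_stack([np.ones(n), x, x + 1e-9 * rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.1, size=n)

b1, *_ = np.linalg.lstsq(X, y, rcond=None)
b2, *_ = np.linalg.lstsq(X, y + 1e-6 * rng.normal(size=n), rcond=None)

print(np.linalg.cond(X))        # enormous condition number
print(np.max(np.abs(b1 - b2)))  # far larger than the 1e-6 perturbation
```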
(I don't have the ability to verify this with different processors, but I will check whether changing the number of cores can have a similar influence.)
39,128 | Is "Uncensored Data" necessarily more "Informative" when compared to "Censored Data"? | We could rephrase your question as asking whether methods based on full data (i.e. noncensored data) are necessarily more efficient than methods based on observed data (i.e. censored data). This question can be answered in general by semiparametric efficiency theory.
Let $Z$ denote the full data (such as covariates and failure time). Suppose we have a data set of i.i.d. draws $Z_1, \dots Z_n$. A full data estimator $\hat\beta$ for an estimand $\beta^*$ is asymptotically linear with influence function $\varphi^F$ if $$\sqrt{n} ( \hat\beta - \beta^*) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \varphi^F(Z_i) + o_P(n^{-1/2}).$$ Such an estimator has asymptotic variance $\mathrm{var}\left\{ \varphi^F(Z) \right\}$. Likewise, let $\mathcal{O}$ be the observed data, which denotes the full data $Z$ subject to coarsening or missingness. We can similarly define the influence function $\varphi$ for an observed data estimator.
This suggests that we can compare the efficiency of observed data estimators and full data estimators through comparisons of their influence functions. Rather than studying the influence function of a given estimator, we can study the class of influence functions of all regular estimators of the estimand $\beta^*$.
Lemma 7.4 in Tsiatis (2006) establishes the relationship between the class of influence functions of observed data estimators and the corresponding class for full data estimators. He shows that the class of observed data influence functions equals
\begin{equation*}
\frac{I(\mathcal{C}=\infty)}{\varpi(\infty, Z)} \varphi^F(Z) + L_2(\mathcal{O}),
\end{equation*}
where $\mathcal{C}=\infty$ denotes that the full data is observed (i.e. $T \leq C$ in survival analysis), $\varpi(\infty, Z) = \mathbb{P}[\mathcal{C}=\infty \mid Z]$ is the conditional probability of observing the full data, $L_2$ is an arbitrary function satisfying $\mathbb{E}[L_2(\mathcal{O})\mid Z] = 0$, and $\varphi^F$ is an arbitrary full data influence function.
Based on this identity, we can derive the asymptotic variance of an observed data asymptotically linear estimator with influence function $\varphi$ as
\begin{align*}
& \mathrm{var} \left\{ \varphi(\mathcal{O}) \right\} \\
=\, & \mathrm{var} \left[ \mathbb{E} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] + \mathbb{E} \left[ \mathrm{var} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] \\
=\, & \mathrm{var} \left[ \mathbb{E} \left\{ \frac{I(\mathcal{C}=\infty)}{\varpi(\infty, Z)} \varphi^F(Z) + L_2(\mathcal{O}) \mid Z \right\} \right] + \mathbb{E} \left[ \mathrm{var} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] \\
=\, & \mathrm{var} \left[ \mathbb{E} \left\{ \frac{I(\mathcal{C}=\infty)}{\varpi(\infty, Z)} \varphi^F(Z) \mid Z \right\} \right] + \mathbb{E} \left[ \mathrm{var} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] \\
=\, & \mathrm{var} \left[ \varphi^F(Z) \right]
+ \mathbb{E} \left[ \mathrm{var} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] \\
\succcurlyeq\, & \mathrm{var} \left[ \varphi^F(Z) \right]
\end{align*}
Here the third equality uses $\mathbb{E}[L_2(\mathcal{O})\mid Z] = 0$, and the fourth uses $\mathbb{E}\left[ I(\mathcal{C}=\infty)/\varpi(\infty, Z) \mid Z \right] = 1$. This shows that any observed data estimator has higher variance than its corresponding full data estimator. The inequality is tight when the second summand has conditional variance zero: this means that the observed data equals the full data. In a survival analysis setting, this shows that whenever censoring is present, the observed data estimators are less efficient than the full data estimators.
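A small Monte Carlo sketch of this inequality under MCAR coarsening (the Gaussian data, sample size, and observation probability are my assumptions): the full-data sample mean and an unbiased inverse-probability-weighted observed-data estimator both target $\mathbb{E}[Z]$, but the observed-data estimator has visibly larger variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, prob = 200, 2000, 0.5  # prob = P(C = infinity), i.e. full data observed

full_est = np.empty(reps)
obs_est = np.empty(reps)
for r in range(reps):
    z = rng.normal(size=n)                 # full data Z_1, ..., Z_n
    seen = rng.random(n) < prob            # I(C = infinity), MCAR coarsening
    full_est[r] = z.mean()                 # full-data estimator of E[Z]
    obs_est[r] = (seen * z / prob).mean()  # IPW observed-data estimator

print(full_est.var(), obs_est.var())       # roughly 1/n versus 2/n here
```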
39,129 | Is "Uncensored Data" necessarily more "Informative" when compared to "Censored Data"? | This comes down to the central role of events in survival models. Consider, for example, the Nelson-Aalen non-parametric estimate of a cumulative hazard function $\Lambda(t)$ and the variance of the estimate (Equations 2.2 and 2.4 of Therneau and Grambsch):
$$ \hat \Lambda(t) = \sum_{i:t_i\le t}\frac{\Delta \bar N (t_i)}{\bar Y(t_i)}$$
$$\text{var}[\hat \Lambda(t)] = \sum_{i:t_i\le t}\frac{\Delta \bar N (t_i)}{\bar Y^2(t_i)}$$
where $i$ indexes event times, $\Delta \bar N(t_i)$ is the increase in event numbers at time $t_i$, and $\bar Y(t_i)$ is the number at risk at time $t_i$.
Right censoring an event time that would have been at $t_j$ removes information about the change in cumulative hazard that should have been seen at time $t_j$. It also lowers the number at risk at times between its censoring time and $t_j$, increasing the variance of the curve calculated from the remaining events. Censoring event times thus harms survival-model estimates in two ways.
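To make the two sums above concrete, here is a small Python sketch of the Nelson-Aalen estimate and its variance on a toy right-censored sample (the data are made up for illustration):

```python
import numpy as np

# toy right-censored data: follow-up time and event indicator (1 = event)
time = np.array([2.0, 3.0, 3.0, 5.0, 6.0, 8.0, 9.0, 9.0])
event = np.array([1, 1, 0, 1, 1, 0, 1, 1])

cum_haz = 0.0   # Nelson-Aalen estimate of Lambda(t)
var = 0.0       # its variance estimate
for t in np.unique(time[event == 1]):
    d = np.sum((time == t) & (event == 1))  # Delta N-bar(t_i): events at t
    y = np.sum(time >= t)                   # Y-bar(t_i): number at risk at t
    cum_haz += d / y
    var += d / y**2

print(cum_haz, var)
```

Each censored observation (event = 0) adds no jump to the sum and drops out of the risk sets $\bar Y$ after its censoring time, which is exactly the double cost described above.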
This page discusses how adding censored cases doesn't contribute to power in Cox models. For parametric models, a known event time provides a contribution to likelihood proportional to the probability density of an event at that specific time. A right-censored event time only provides a term proportional to the survival function up to the censoring time. This page shows the formulas.
39,130 | How do I interpret or explain loess plot? | Bootstrapping or a permutation test (as suggested by Stephan Kolassa) will help you assess the significance of the apparent (but complex) association in the plot.
You need to adopt a reasonable measure of what the Loess fit has accomplished. One is the mean squared residual. (Many others are possible, but a discussion of that is not pertinent here.) Let's call this the "loss."
You also need to formulate a specific null hypothesis. The simplest is that the data exhibit no trend: that is, they vary randomly and independently around a common value. The best way to estimate this common value with mean squared error loss is to compute its arithmetic mean. With a sufficient amount of data--often ten or more observations suffice--the differences between the response ("Value" in the plot) and this mean will be only slightly (negatively) correlated and can stand in as surrogates for the true random errors.
The permutation distribution in this setting is the distribution of the losses associated with all possible permutations (reorderings) of the residuals after conducting a Loess fit. Under the null hypothesis, all these permutations are equally likely.
The permutation test compares the actual loss to the permutation distribution of losses. As a practical matter, the latter is estimated by means of a few random permutations. (There are far too many permutations to permit generating them all.)
Here, to illustrate, are data generated with no inherent trend along with a permutation distribution estimated from 500 draws. The vertical red line shows the loss for these data: it is close to the middle. Its p-value is computed as usual for a two-sided test: the red line splits the histogram left and right into two areas and the p-value is twice the smaller area. Very small p-values are called "significant" and taken as evidence of some kind of trend. The shape of the Loess plot (shown at left) helps you interpret just what that trend might be.
The large, anodyne p-value is consistent with the trend-free method of generating these data.
For data closer to those in the question, the result is different:
The actual mean squared error of $0.000735$ is inconsistent with the squared errors typical of the permutation distribution: this is a significant trend. The data plot at the left suggests the trend is principally a decrease in mean values from $0.31$ at age 60 to $0.25$ at ages 80 and over.
BTW, it is no surprise that the actual statistics in both cases have approximately the same values: they both estimate the error variance, which was equal to $0.025^2 = 0.000625$ in both cases. The curvilinear trends in the second instance, though, cause the simple fit (under the null hypothesis) to be poorer, thereby shifting the permutation distribution to higher values, as you can see by comparing the two figures.
The R code needed is simple, clear, and efficient. fit performs the Loess fit while stat uses that to compute the mean squared error.
fit <- function(y, x, ...) lowess(x, y, ...)
stat <- function(y, x) mean((y - fit(y, x)$y)^2) # Mean squared error loss
Given a data frame object X with Value and Age columns to store the response and explanatory variables, respectively, the permutation distribution is estimated by computing predicted values and residuals under the null hypothesis and then iteratively permuting the residuals (with the sample function) and recomputing the loss.
predicted <- mean(X$Value)
residuals <- X$Value - predicted
dsample <- replicate(5e3, stat(predicted + sample(residuals), X$Age))
In this case, after about one second of computation, dsample winds up with 5e3 ($5000$) values randomly drawn from the permutation distribution. The figures are then created by applying hist to dsample to show these values.
# Compute the p-value
actual <- with(X, stat(Value, Age))
stats <- c(actual, dsample)
p <- mean(stats <= actual)
p <- 2 * min(1/2, p, 1-p)
# Display the results
hist(dsample, freq=FALSE, xlim=range(stats),
col=gray(.95),
sub=paste("p-value is approximately", signif(p, 2)),
main="Simulated Null Permutation Distribution",
xlab = "Mean Squared Difference")
abline(v = actual, lwd=2, col="Red") # The statistic for the data
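For readers working in Python rather than R, the same permutation scheme can be sketched as follows; the crude fixed-radius local-mean smoother is my stand-in for lowess, and the simulated trend-free data are likewise an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def smooth(y, x, radius=3.0):
    # crude local-mean smoother standing in for lowess
    return np.array([y[np.abs(x - xi) <= radius].mean() for xi in x])

def stat(y, x):
    return np.mean((y - smooth(y, x)) ** 2)  # mean squared error loss

x = np.linspace(60, 90, 100)
y = 0.3 + rng.normal(scale=0.025, size=x.size)  # trend-free data

predicted = y.mean()                 # fit under the null hypothesis
residuals = y - predicted
actual = stat(y, x)
dsample = np.array([stat(predicted + rng.permutation(residuals), x)
                    for _ in range(500)])

p = np.mean(np.concatenate(([actual], dsample)) <= actual)
p = 2 * min(0.5, p, 1 - p)
print(p)
```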
One caveat: I had to tune this Loess fit by specifying a relatively short width for its search radius. This frequently is the case. An honest permutation test must implement this fine-tuning in some automatic way and apply it to each permutation. Otherwise, the original (hand-tuned) fit will be too good and the resulting p-value will be too small--perhaps far too small. People often use some sort of cross-validation technique for such automatic tuning.
39,131 | How do I interpret or explain loess plot? | First off, it's extremely good practice to also show the original data, which puts the loess plot into context. Here, the context is that there is still a lot of variation in the data. For instance, the initial dip looks rather strange and could easily be due to noise - and we only see this because we see the full point cloud, not only the loess line with the confidence band. So this is good.
You ask two questions: one about interpretation, the other about significance. In terms of interpretation, I would discuss the downward slope at the right end, but per above, not really trust the dip in the middle.
In terms of significance, this is harder. You could try to assess whether your loess model explains significantly more variation in the data than a comparison model, like an intercept-only model (a horizontal flat line), or a simple linear regression (a slanted straight line). This is what ANOVA does. The problem is that the standard F test in ANOVA requires knowing how many parameters (degrees of freedom) your model used - and that is notoriously difficult to know in the case of a loess model. Greg Snow's answer to How do I find a p-value of smooth spline / loess regression? gives a few tentative ideas in his penultimate paragraph (though not in terms of variance explained, but his ideas could be adapted to this test statistic).
However, as Greg also notes in his answer, assessing the significance of a spline fit can be done in an ANOVA framework. Given the suspicious behavior of your loess fit, I would suggest that you try a fit using natural or restricted cubic splines with few knots, then test this spline fit against the more parsimonious linear fit.
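A numpy sketch of that nested comparison (the simulated data, knot locations, and truncated-power basis are my assumptions — a crude stand-in for a proper natural-spline basis):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120
x = np.sort(rng.uniform(60, 90, n))
y = 0.31 - 0.002 * (x - 60) + rng.normal(scale=0.025, size=n)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# nested models: straight line versus line + truncated-power cubic terms
knots = [70.0, 80.0]
X1 = np.column_stack([np.ones(n), x])
X2 = np.column_stack([X1] + [np.clip(x - k, 0.0, None) ** 3 for k in knots])

rss1, rss2 = rss(X1, y), rss(X2, y)
df1 = X2.shape[1] - X1.shape[1]           # extra spline parameters
df2 = n - X2.shape[1]                     # residual degrees of freedom
F = ((rss1 - rss2) / df1) / (rss2 / df2)  # compare to an F(df1, df2) distribution
print(F)
```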
39,132 | matrix-calculus - Understanding numerator/denominator layouts | If you think of $L$ as a column vector, then I think both your sources agree that $\frac{dJ}{dL}$ should be a row vector.
But what if you really want $L$ as a row vector? Surely, the math shouldn't "care" about how you arrange your collection of numbers. One way to clarify this is by designating dimensions of your objects as "covariant" or "contravariant".
Many things are contravariant, meaning they change opposite to a change in basis (if you go from a bigger unit, "hours" to a smaller unit "seconds", your measurements become bigger). On the other hand, a derivative, like "m/hour" becomes smaller when you change the units to "m/second", hence "co".
Things which are "co" can be multiplied with things which are "contra", e.g. 5 m/second * 10 seconds = 50m. Yet it makes much less sense to multiply two "contra" or two "co" together (admittedly, second^2 or m^2/second^2 are sometimes useful units, but this is not always the case).
So yes, you could say that $\frac{dJ}{dL}$ is a "column" covector with size $m$, and $\frac{dL}{da}$ is a matrix with shape (contra-$m$, co-$m$). We could write $\left(\frac{dJ}{dL}\right)^i = \frac{\partial J}{\partial L_i}$, and $\left(\frac{dL}{da}\right)_i^j = \frac{\partial L_i}{ \partial a_j}$ (we give superscripts to "co" dimensions, and subscripts to "contra", to make things clear). Then, following our rule that co can only be multiplied by contra, we see that
$$\left(\frac{dJ}{da}\right)^j = \sum_{i=1}^m \left(\frac{dJ}{dL}\right)^i \left(\frac{dL}{da}\right)_i^j = \left(\frac{dJ}{dL}^T \frac{dL}{da} \right)^j$$
So even if you "force" $\frac{dJ}{dL}$ into a column, if you want to respect our new multiplication rule, you need to transpose before applying matrix mult.
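A quick numpy check of that contraction rule (the shapes and data are arbitrary): contracting the single "co" index of $dJ/dL$ against the "contra" index of $dL/da$ is the same as transposing the forced column and using matrix multiplication.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4
dJ_dL = rng.normal(size=(m, 1))  # gradient "forced" into a column
dL_da = rng.normal(size=(m, m))  # (contra-m, co-m) Jacobian

dJ_da = dJ_dL.T @ dL_da          # transpose first: (1, m) row covector

# the same sum written index-wise: sum_i (dJ/dL)^i (dL/da)_i^j
dJ_da_idx = np.einsum('i,ij->j', dJ_dL[:, 0], dL_da)
print(np.allclose(dJ_da[0], dJ_da_idx))  # True
```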
To take this a step further, let's say we are interested in $\frac{da}{dX}$, which has shape (contra-$m$, co-$(n,m)$): $\left( \frac{da}{dX} \right)_j^{u,v} = \frac{\partial a_j}{\partial X_{u,v}}$. Then we have
$$\left(\frac{dJ}{dX}\right)^{u,v} = \sum_{j=1}^m \left(\frac{dJ}{da}\right)^j \left(\frac{da}{dX}\right)_j^{u,v}$$
To translate this back to "numerator layout" matrix calculus terms, you could say that column vectors are always contravariant, row vectors are always covariant or "covectors", and gradients are covariant, hence always row vectors. An $m$ by $n$ Jacobian matrix is contra-$m$, co-$n$. This works nicely because if you think of a column vector as a (contra-$n$, co-1) matrix and a row vector as a (contra-1, co-$m$) matrix, you'll notice that by following the ordinary rules of matrix multiplication, you'll never accidentally multiply two contra / two co together, and the product of two objects will always be in (contra, co) form. On the other hand, "denominator layout" has everything in (co, contra) form, which is just as fine and accomplishes the same thing.
However, if you start working with less standard objects, like the derivative of a matrix with respect to a vector, or the derivative of a row vector with respect to a column vector (as in our example above), then you'll need to keep track for yourself what is covariant and what is contravariant.
But what if you really want $L$ as a row vector. Surely, the math shouldn't " | matrix-calculus - Understanding numerator/denominator layouts
If you think of $L$ as a column vector, then I think both your sources agree that $\frac{dJ}{dL}$ should be a row vector.
But what if you really want $L$ as a row vector. Surely, the math shouldn't "care" about how you arrange your collection of numbers. One way to clarify this is by designating dimensions of your objects as "covariant" or "contravariant".
Many things are contravariant, meaning they change opposite to a change in basis (if you go from a bigger unit, "hours" to a smaller unit "seconds", your measurements become bigger). On the other hand, a derivative, like "m/hour" becomes smaller when you change the units to "m/second", hence "co".
Things which are "co" can be multiplied with things which are "contra", e.g. 5 m/second * 10 seconds = 50m. Yet it makes much less sense to multiply two "contra" or two "co" together (admittedly, second^2 or m^2/second^2 are sometimes useful units, but this is not always the case).
So yes, you could say that $\frac{dJ}{dL}$ is a "column" covector with size $m$, and $\frac{dL}{da}$ is a matrix with shape (contra-$m$, co-$m$). We could write $\left(\frac{dJ}{dL}\right)^i = \frac{\partial J}{\partial L_i}$, and $\left(\frac{dL}{da}\right)_i^j = \frac{\partial L_i}{ \partial a_j}$ (we give superscripts to "co" dimensions, and subscripts to "contra", to make things clear). Then, following our rule that co can only be multipled by contra, we see that
$$\left(\frac{dJ}{da}\right)^j = \sum_{i=1}^m \left(\frac{dJ}{dL}\right)^i \left(\frac{dL}{da}\right)_i^j = \left(\frac{dJ}{dL}^T \frac{dL}{da} \right)^j$$
So even if you "force" $\frac{dJ}{dL}$ into a column, if you want to respect our new multiplication rule, you need to transpose before applying matrix mult.
To take this a step further, let's say we are interested in $\frac{da}{dX}$, which has shape (contra-$m$, co-$(n,m)$): $\left( \frac{da}{dX} \right)_j^{u,v} = \frac{\partial a_j}{\partial X_{u,v}}$. Then we have
$$\left(\frac{dJ}{dX}\right)^{u,v} = \sum_{j=1}^m \left(\frac{dJ}{da}\right)^j \left(\frac{da}{dX}\right)_j^{u,v}$$
To translate this back to "numerator layout" matrix calculus terms, you could say that column vectors are always contravariant, row vectors are always covariant or "covectors", gradients are covariant, hence always row vectors. An $m$ by $n$ Jacobian matrix is contra-$m$, co-$n$. This works nicely because if you think of a column vector as a (contra-$n$, co-1) matrix or a row vector as a (contra-1, co-$m$) matrix, notice that by following the ordinary rules of matrix mutliplcation, you'll never accidentally multiply two contra / two co together, and the product of two objects will always be in a (contra, co) form. On the other hand, "denominator layout" has everything in (co, contra) form, which is just as fine and accomplishes the same thing.
However, if you start working with less standard objects, like the derivative of a matrix with respect to a vector, or the derivative of a row vector with respect to a column vector (as in our example above), then you'll need to keep track for yourself what is covariant and what is contravariant.
39,133 | matrix-calculus - Understanding numerator/denominator layouts | Unfortunately, I didn't come across a resource that doesn't leave gaps. It's a disputed area. Even the chain rule may sometimes not make a lot of sense; e.g. some terms might be 3D tensors, for which matrix multiplication is not well-defined, arising when matrices are differentiated by vectors or vice versa.
Having said that, these rules are also very useful if you comply with them. For example, define your vectors as column vectors, i.e. $n\times 1$. The chain rule then looks like:
$$\frac{\partial J}{\partial \mathbf a^T} = \frac{\partial J}{\partial \mathbf L^T}\frac{\partial \mathbf L}{\partial \mathbf a^T}$$
where the $^T$ in the denominators means differentiation with respect to row (transposed) vectors, so the matrix multiplication sizes match. All equations you see in the sources assume such a harmony; e.g. a chain rule expanding to the right as above would not make sense with denominator notation, so it would expand to the left instead, but the sources rarely mention it. Therefore, whenever a vector is of concern, it'd be better to assume it's a column vector (the Wikipedia equations note this).
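As a shape check of this column-vector convention, here is a NumPy sketch (the sizes, $J=\sum_i L_i$, and $L = W\mathbf a$ are made-up assumptions for illustration):

```python
import numpy as np

m, n = 4, 3
rng = np.random.default_rng(1)
W = rng.normal(size=(m, n))
a = rng.normal(size=(n, 1))   # column vector, n x 1
L = W @ a                     # column vector, m x 1 (L = W a)
J = float(np.sum(L))          # scalar

# Numerator layout: dJ/dL^T is 1 x m (row), dL/da^T is the m x n Jacobian,
# so the product dJ/da^T is 1 x n -- the sizes match by construction.
dJ_dLT = np.ones((1, m))      # dJ/dL_i = 1 since J = sum(L)
dL_daT = W
dJ_daT = dJ_dLT @ dL_daT
print(dJ_daT.shape)           # (1, 3)
```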
39,134 | Why getting very high values for MSE/MAE/MAPE when R2 score is very good | I don't see how you can tell from those metrics that the results are "very bad". Compare the metrics to things like the mean, range, or standard deviation of the data; in all the cases, MSE or RMSE (the square root of MSE) is much smaller than the variability of the data.
The metrics don't have an absolute numeric value, so you need some kind of benchmark for them. The most trivial model minimizing squared error is predicting the mean for all the samples; in such a case, RMSE would be equal to the standard deviation, and your model is better than this. For MAE, the trivial model would be predicting the median, with MAE equal to MAD, and my guess is that you're still better. For a less trivial benchmark, you can compare the results to something like linear regression.
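These benchmark identities are easy to verify numerically. A small Python/NumPy sketch (with a made-up synthetic target): the RMSE of the mean-predicting baseline equals the population standard deviation, and the MAE of the median-predicting baseline equals the mean absolute deviation about the median.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=100, scale=15, size=1000)   # synthetic target values

# Trivial baselines: predict the mean (for MSE) or the median (for MAE)
rmse_baseline = np.sqrt(np.mean((y - y.mean()) ** 2))
mae_baseline = np.mean(np.abs(y - np.median(y)))

print(np.isclose(rmse_baseline, y.std()))      # True: RMSE of the mean = std
print(round(mae_baseline, 2))                  # MAD about the median
```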
The only exception is MAPE, which for the second dataset is very high, but the dataset has zeros in it, and in such a case you should not use MAPE as a metric: whatever you divide by a value close to zero will be extremely large and will destroy the metric. For example, say that the true value is 0 and you predict the mean for it:
> abs(0.34 - 0) / (0 + 1e-5)
[1] 34000
See What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? for more details, but MAPE is a tricky metric that should not be used blindly.
39,135 | Why getting very high values for MSE/MAE/MAPE when R2 score is very good | The quick answer is that $R^2$ measures a reduction in variance, compared to always guessing $\bar y$, no matter the predictors. What your results tell me is that the variance from always guessing $\bar y$ is so gigantic that even a huge $R^2$ value like $0.9$ or $0.99$ still does not let you get as accurate as you want or need for your application. Maybe you need $R^2>0.9999$ for your task (ten-thousand-fold reduction in variance).
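A quick numerical illustration of this point (a Python/NumPy sketch with made-up numbers, not the questioner's data): with a high-variance target, even $R^2$ near $0.998$ leaves a large absolute MSE.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y = 1_000.0 * x + rng.normal(scale=50.0, size=n)   # huge-variance target

yhat = 1_000.0 * x            # suppose the model recovers the signal exactly
mse = np.mean((y - yhat) ** 2)
r2 = 1.0 - mse / np.var(y)

print(r2 > 0.99, mse > 1_000)  # True True: excellent R^2, yet MSE ~ 2500
```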
39,136 | "Histomancy": What does McElreath propose we do instead? | McElreath appears to be maligning the oft-repeated practice of requiring regressors (to the right of the equality sign, sometimes 'independent' or 'predictor' variables) and regressands (to the left of the equality sign, sometimes 'dependent' or 'outcome' variables) to be normally distributed (i.e. "Gaussian") in the context of something like OLS regression.
In fact none of these variables needs to be normally, or even nearly normally, distributed. It is the residuals that require this assumption, as in the simple model here:
$$y_i = \beta_0 + \beta_x x_i + \varepsilon_i\text{; where }\varepsilon \sim \mathcal{N}(0,\sigma)$$
It is relatively easy to demonstrate this:
n <- 200
x <- runif(n)        # regressor: uniform, not Gaussian
b0 <- 10             # true intercept
bx <- -2             # true slope
s <- 0.1             # residual standard deviation
e <- rnorm(n, 0, s)  # residuals: the only Gaussian ingredient
y <- b0 + bx*x + e
summary(lm(y ~ x))
hist(y)
hist(x)
hist(e)
Notice that:
You quite adequately estimate $\beta_0$, $\beta_x$, and $\sigma$ using the OLS MLE estimators:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 10.00007 0.01337 747.85 <2e-16 ***
x -2.00687 0.02235 -89.81 <2e-16 ***
Residual standard error: 0.0957 on 198 degrees of freedom
The histograms of $y$ and $x$ (from hist(y) and hist(x)) are nothing like normally distributed.
The histogram of $\varepsilon$ (from hist(e)) is approximately normal.
Of course there are other linear regression models than OLS (including multiple regression), but MLE estimation is quite often used for such models, and the conflation of distributions of variables with residuals is reflected widely in questions on this site, in the literature, and in research meetings.
The upshot is that we should strive to understand our modeling assumptions (whether our data are continuous, count, or what have you) in application rather than waste time in pointless efforts (i.e. "Histomancy") like normalizing all our regression variables.
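The same demonstration works in Python (NumPy only; the numbers mirror the R snippet above, and the closed-form normal-equations solve is my own sketch): OLS recovers the coefficients even though neither $x$ nor $y$ is Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(size=n)          # regressor: uniform, not Gaussian
e = rng.normal(0, 0.1, size=n)   # residuals: the only Gaussian ingredient
y = 10 - 2 * x + e

X = np.column_stack([np.ones(n), x])        # design matrix with intercept
b0_hat, bx_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(round(b0_hat, 1), round(bx_hat, 1))   # close to 10 and -2
```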
39,137 | Train and Validation vs. Train, Test, and Validation | In your two-way split, as you also mentioned, your validation set is actually your test set. In your description, you haven't mentioned hyperparameter optimisation (HPO), but it's a key step in many machine learning algorithms. When you need HPO, you'll either need a separate validation set to tune the HPs or you tune them using cross-validation over the training set. In the end, the model is trained over the whole training dataset and tested over the test set.
For your ML algorithm, if you don't need to optimise HPs, you can obtain loss metrics using cross-validation over the training set as you did, but this could have been done using the entire dataset as well, i.e. you have five 80-20 splits and average the loss across folds. You don't need a two-level test.
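A minimal end-to-end sketch of this workflow (Python/NumPy; the ridge model, $\lambda$ grid, and synthetic data are my own assumptions, not from the question): hyperparameters are tuned by 5-fold cross-validation on the training part only, and the test set is touched exactly once at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=0.5, size=n)

# Outer split: 80% train, 20% test; the test set is only used once, at the end.
idx = rng.permutation(n)
train, test = idx[:160], idx[160:]

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Hyperparameter tuning by 5-fold CV on the training set only
folds = np.array_split(train, 5)
lambdas = [0.01, 0.1, 1.0, 10.0]
cv_mse = []
for lam in lambdas:
    errs = []
    for k in range(5):
        val = folds[k]
        tr = np.concatenate([folds[j] for j in range(5) if j != k])
        b = ridge_fit(X[tr], y[tr], lam)
        errs.append(np.mean((y[val] - X[val] @ b) ** 2))
    cv_mse.append(np.mean(errs))

best_lam = lambdas[int(np.argmin(cv_mse))]
b = ridge_fit(X[train], y[train], best_lam)          # refit on all training data
test_mse = np.mean((y[test] - X[test] @ b) ** 2)     # single, final evaluation
print(best_lam, round(test_mse, 3))
```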
39,138 | Train and Validation vs. Train, Test, and Validation | The two methods you are describing are essentially the same thing. When you describe using cross-validation, this is analogous to using a train/test split, just repeated multiple times. Train/validation/test and train/test with cross-validation on the training set are exactly the same, but cross-validation repeats the procedure for different splits of train/test.
39,139 | Train and Validation vs. Train, Test, and Validation | In general, you need a split of your data set into test/training whenever there is a danger of using information from the test set during training. The information might flow through you as the modeller. Let this sink in for a moment.
For example, if you build one model with one method, use an 80/20 split or cross-validation. If you compare many methods on an 80/20 split, you implicitly use information about performance on the test set in your modelling. Add a validation data set in this case. If you then choose some of these models for hyperparameter optimization, you need an additional test set, or you must optimize the models as they are built (this is hard). And so on.
In any case, keep the validation data set off the table for as long as possible and use it as little as possible.
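The "information flows through you" point can be simulated (a Python/NumPy sketch with a made-up setup): with pure-noise labels, selecting the best of many random classifiers on a fixed test set yields above-chance accuracy on that set, even though nothing was learned.

```python
import numpy as np

rng = np.random.default_rng(0)
n_test, n_models = 100, 50
y = rng.integers(0, 2, size=n_test)            # pure-noise binary labels

# 50 "models" that guess at random; select the best one on the test set
accs = [(rng.integers(0, 2, size=n_test) == y).mean() for _ in range(n_models)]
print(max(accs))   # comfortably above 0.5: an optimistic, leaked estimate
```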
39,140 | Train and Validation vs. Train, Test, and Validation | “All models are wrong, but some are useful”. George E. P. Box
When we build a model, our main material is data. The ultimate goal, which will probably never be fully achieved, is for our model to perform on different data the way it performed on the data we trained it with. But we can obviously try, and that is where this split comes in.
Now I will not discuss the size of the split, but rather whether we should split 2 ways or 3 ways. To me, it is unnecessary to split into three parts for data you control. For example, you have ECG data from 100 patients. You can see whether your model works by splitting 80/20 as train and test. If it performs well on the test set, there is no need for validation, as your test set was different than your train set.
The question is when you need a validation set. For example, clinicians have an extra 20 patients' ECG data that were not used in your train and test sets. You trained your model and tuned your parameters through your test set. But now there is completely different data, which the clinicians hold, and they will try your model on it for validation. So the intuition can be: the doctors want some model. They have 120 patients' data. They give you 100. You split those as train and test, and after that the doctors use your model on the validation set.
In any competition, for example Kaggle, data are probably divided this way.
39,141 | Mean squared error of OLS smaller than Ridge? | That is correct, because $b_{OLS}$ is the minimizer of the (in-sample) MSE by definition. The problem ($X^TX$ is invertible here) has only one minimum, and any value other than $b_{OLS}$ will have higher MSE on the training dataset.
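This can be checked directly on the four observations from the question (a Python/NumPy sketch): for every $\lambda>0$, the in-sample MSE of ridge is at least that of OLS.

```python
import numpy as np

X = np.array([[3.0, 3.0], [1.1, 1.0], [-2.1, -2.0], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def coef(lam):
    # Ridge coefficients; lam = 0 gives the OLS solution
    return np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

def in_sample_mse(lam):
    return np.mean((y - X @ coef(lam)) ** 2)

mse_ols = in_sample_mse(0.0)
ok = all(in_sample_mse(lam) >= mse_ols for lam in [0.1, 1.0, 4.0, 100.0])
print(ok)  # True
```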
39,142 | Mean squared error of OLS smaller than Ridge? | Like gunes said, the Hastie quote applies to out-of-sample (test) MSE, whereas in your question you are showing us in-sample (training) MSE, which Hastie is not referring to.
For your in-sample case, maybe check mean absolute error (MAE) instead, which will put OLS and ridge on equal footing. Otherwise, OLS has the upper hand if MSE is the performance criterion, since it directly minimizes the plain MSE formula whereas ridge doesn't.
39,143 | Mean squared error of OLS smaller than Ridge? | Ordinary least squares (OLS) minimizes the residual sum of squares (RSS)
$$
RSS=\sum_{i}\left( \varepsilon _{i}\right) ^{2}=\varepsilon ^{\prime
}\varepsilon =\sum_{i}\left( y_{i}-\hat{y}_{i}\right) ^{2}
$$
The mean squared error (in the version you are using) equals
$$
MSE=\frac{RSS}{n}
$$
where $n$ is the number of observations. Since $n$ is a constant, minimizing
the RSS is equivalent to minimizing the MSE. It is for this reason that the
Ridge-MSE cannot be smaller than the OLS-MSE. Ridge minimizes the RSS as well, but
under a constraint, and as long as $\lambda >0$, this constraint is binding. The
answers of gunes and develarist already point in this direction.
As gunes said, your version of the MSE is the in-sample MSE.
When we calculate the mean squared error of a Ridge regression, we usually mean a different MSE. We are
typically interested in how well the Ridge estimator allows us to predict
out-of-sample. It is here where Ridge may, for certain
values of $\lambda$, outperform OLS.
We usually do not have out-of-sample observations so we split our sample
into two parts.
Training sample, which we use to estimate the coefficients, say $\hat{\beta}^{Training}$
Test sample, which we use to assess our prediction $\hat{y}_{i}^{Test}=X_{i}^{Test}\hat{\beta}^{Training}$
The test sample plays the role of the out-of-sample observations. The
test-MSE is then given by
$$
MSE_{Test}=\sum_{i}\left( y_{i}^{Test}-\hat{y}_{i}^{Test}\right) ^{2}
$$
Your example is rather small, but it is still possible to illustrate the
procedure.
% Generate Data.
X = [3, 3
1.1 1
-2.1 -2
-2 -2];
y = [1 1 -1 -1]';
% Sample size m and number of regressors n (used below)
[m, n] = size(X);
% Specify the size of the penalty factor
lambda = 4;
% Initialize
MSE_Test_OLS_vector = zeros(1,m);
MSE_Test_Ridge_vector = zeros(1,m);
% Looping over the m observations
for i = 1:m
% Generate the training sample
X1 = X; X1(i,:) = [];
y1 = y; y1(i,:) = [];
% Generate the test sample
x0 = X(i,:);
y0 = y(i);
% The OLS and the Ridge estimators
b_OLS = ((X1')*X1)^(-1)*((X1')*y1);
b_Ridge = ((X1')*X1+lambda*eye(n))^(-1)*((X1')*y1);
% Prediction and MSEs
yhat0_OLS = x0*b_OLS;
yhat0_Ridge = x0*b_Ridge;
mse_ols = sum((y0-yhat0_OLS).^2);
mse_ridge = sum((y0-yhat0_Ridge).^2);
% Collect Results
MSE_Test_OLS_vector(i) = mse_ols;
MSE_Test_Ridge_vector(i) = mse_ridge;
end
% Mean MSEs
MMSE_Test_OLS = mean(MSE_Test_OLS_vector)
MMSE_Test_Ridge = mean(MSE_Test_Ridge_vector)
% Median MSEs
MedMSE_Test_OLS = median(MSE_Test_OLS_vector)
MedMSE_Test_Ridge = median(MSE_Test_Ridge_vector)
With $\lambda =4$, for example, Ridge outperforms OLS. We find the following median MSEs:
MedMSE_Test_OLS = 0.1418
MedMSE_Test_Ridge = 0.1123.
Interestingly, I could not find any value of $\lambda $ for which Ridge
performs better when we use the average MSE rather than the median. This may
be because the data set is rather small and single observations (outliers)
may have a large bearing on the average. Maybe some others want to comment on
this.
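For readers without MATLAB, here is a rough Python/NumPy translation of the leave-one-out loop above (same data, $\lambda=4$; the medians quoted above come from the author's run, and this sketch reproduces the qualitative ranking): ridge wins on the median test MSE, but not on the mean.

```python
import numpy as np

X = np.array([[3.0, 3.0], [1.1, 1.0], [-2.1, -2.0], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
lam = 4.0
m, n = X.shape

ols_mse, ridge_mse = [], []
for i in range(m):
    tr = [j for j in range(m) if j != i]     # training sample (leave one out)
    X1, y1 = X[tr], y[tr]
    x0, y0 = X[i], y[i]                      # the held-out test observation
    b_ols = np.linalg.solve(X1.T @ X1, X1.T @ y1)
    b_rdg = np.linalg.solve(X1.T @ X1 + lam * np.eye(n), X1.T @ y1)
    ols_mse.append((y0 - x0 @ b_ols) ** 2)
    ridge_mse.append((y0 - x0 @ b_rdg) ** 2)

print(np.median(ridge_mse) < np.median(ols_mse))   # True: ridge wins on median
print(np.mean(ridge_mse) < np.mean(ols_mse))       # False: but not on the mean
```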
The first two columns of the table above show the results of a regression of $x_{1}$ and $x_{2}$ on $y$ separately. Both coefficients positively correlate with $y$. The large and apparently erratic sign change in column 3 is a result of the high correlation of your regressors. It is probably quite intuitive that any prediction based on the erratic OLS estimates in column 3 will not be very reliable. Column 4 shows the result of a Ridge regression with $\lambda=4$.
Important note: Your data are already centered (have a mean of zero), which allowed us to ignore the constant term. Centering is crucial here if the data do not have a mean of zero, as you do not want the shrinkage to be applied to the constant term. In addition to centering, we usually normalize the data so that they have a standard deviation of one. Normalizing the data assures that your results do not depend on the units in which your data are measured. Only if your data are in the same units, as you may assume here to keep things simple, you may ignore the normalization.
$$
RSS=\sum_{i}\left( \varepsilon _{i}\right) ^{2}=\varepsilon ^{\prime
}\varepsilon =\sum_{i}\left( y_{i}-\hat{y}_{i}\right) ^ | Mean squared error of OLS smaller than Ridge?
Ordinary least squares (OLS) minimizes the residual sum of squares (RSS)
$$
RSS=\sum_{i}\left( \varepsilon _{i}\right) ^{2}=\varepsilon ^{\prime
}\varepsilon =\sum_{i}\left( y_{i}-\hat{y}_{i}\right) ^{2}
$$
The mean squared deviation (in the version you are using it) equals
$$
MSE=\frac{RSS}{n}
$$
where $n$ is the number of observations. Since $n$ is a constant, minimizing
the RSS is equivalent to minimizing the MSE. It is for this reason, that the
Ridge-MSE cannot be smaller than the OLS-MSE. Ridge minimizes the RSS as well but
under a constraint and as long $\lambda >0$, this constraint is binding. The
answers of gunes and develarist already point in this direction.
As gunes said, your version of the MSE is the in-sample MSE.
When we calculate the mean squared error of a Ridge regression, we usually mean a different MSE. We are
typically interested in how well the Ridge estimator allows us to predict
out-of-sample. It is here, where Ridge may for certain
values of $\lambda $ outperform OLS.
We usually do not have out-of-sample observations so we split our sample
into two parts.
Training sample, which we use to estimate the coefficients, say $\hat{\beta}^{Training}$
Test sample, which we use to assess our prediction $\hat{y}%
_{i}^{Test}=X_{i}^{Test}\hat{\beta}^{Training}$
The test sample plays the role of the out-of-sample observations. The
test-MSE is then given by
$$
MSE_{Test}=\sum_{i}\left( y_{i}^{Test}-\hat{y}_{i}^{Test}\right) ^{2}
$$
Your example is rather small, but it is still possible to illustrate the
procedure.
% Generate Data.
X = [3, 3
1.1 1
-2.1 -2
-2 -2];
y = [1 1 -1 -1]';
% Specify the size of the penalty factor
lambda = 4;
% Initialize
MSE_Test_OLS_vector = zeros(1,m);
MSE_Test_Ridge_vector = zeros(1,m);
% Looping over the m obserations
for i = 1:m
% Generate the training sample
X1 = X; X1(i,:) = [];
y1 = y; y1(i,:) = [];
% Generate the test sample
x0 = X(i,:);
y0 = y(i);
% The OLS and the Ridge estimators
b_OLS = ((X1')*X1)^(-1)*((X1')*y1);
b_Ridge = ((X1')*X1+lambda*eye(n))^(-1)*((X1')*y1);
% Prediction and MSEs
yhat0_OLS = x0*b_OLS;
yhat0_Ridge = x0*b_Ridge;
mse_ols = sum((y0-yhat0_OLS).^2);
mse_ridge = sum((y0-yhat0_Ridge).^2);
% Collect Results
MSE_Test_OLS_vector(i) = mse_ols;
MSE_Test_Ridge_vector(i) = mse_ridge;
end
% Mean MSEs
MMSE_Test_OLS = mean(MSE_Test_OLS_vector)
MMSE_Test_Ridge = mean(MSE_Test_Ridge_vector)
% Median MSEs
MedMSE_Test_OLS = median(MSE_Test_OLS_vector)
MedMSE_Test_Ridge = median(MSE_Test_Ridge_vector)
With $\lambda =4$, for example, Ridge outperforms OLS. We find the following median MSEs:
MedMSE_Test_OLS = 0.1418
MedMSE_Test_Ridge = 0.1123.
Interestingly, I could not find any value of $\lambda $ for which Ridge
performs better when we use the average MSE rather than the median. This may
be because the data set is rather small and single observations (outliers)
may have a large bearing on the average. Maybe some others want to comment on
this.
The first two columns of the table above show the results of a regression of $x_{1}$ and $x_{2}$ on $y$ separately. Both coefficients positively correlate with $y$. The large and apparently erratic sign change in column 3 is a result of the high correlation of your regressors. It is probably quite intuitive that any prediction based on the erratic OLS estimates in column 3 will not be very reliable. Column 4 shows the result of a Ridge regression with $\lambda=4$.
Important note: Your data are already centered (have a mean of zero), which allowed us to ignore the constant term. Centering is crucial here if the data do not have a mean of zero, as you do not want the shrinkage to be applied to the constant term. In addition to centering, we usually normalize the data so that they have a standard deviation of one. Normalizing the data assures that your results do not depend on the units in which your data are measured. Only if your data are in the same units, as you may assume here to keep things simple, you may ignore the normalization. | Mean squared error of OLS smaller than Ridge?
39,144 | Mean squared error of OLS smaller than Ridge? | As others have pointed out, the reason $β_{λ=0}$ (OLS) appears to have lower MSE than $β_{λ>0}$ (ridge) in your example is that you computed both values of $β$ from a matrix of four (more generally, $N$) observations of two (more generally, $P$) predictors $X$ and corresponding four response values $Y$, and then computed the loss on these same four observations. Forgetting OLS versus ridge for a moment, let's compute $β$ manually; specifically, we seek $β$ such that it minimizes the MSE of the in-sample data (the four observations). Given that $\hat{Y}=Xβ$, we need to express in-sample MSE in terms of $β$.
$MSE_{in-sample}=\frac{1}{N}\|Y-Xβ\|^2$
$MSE_{in-sample}=\frac{1}{N}[(Y-Xβ)^T(Y-Xβ)]$
$MSE_{in-sample}=\frac{1}{N}[Y^TY-2β^TX^TY+β^TX^TXβ]$
To find the value of $β$ minimizing this expression, we differentiate the expression with respect to $β$, set it equal to zero, and solve for $β$. I will omit the $\frac{1}{N}$ at this point since it's just a scalar and has no impact on the solution.
$\frac{d}{dβ}[Y^TY-2β^TX^TY+β^TX^TXβ]=0$
$-2X^TY+2X^TXβ=0$
$X^TXβ=X^TY$
$β=(X^TX)^{-1}X^TY$
Which is a familiar result. By construction, this is the value of $β$ that results in the minimum in-sample MSE. Let's generalize this to include a ridge penalty $λ$.
$β=(X^TX+λI)^{-1}X^TY$
Given the foregoing, it's clear that for $λ>0$, the in-sample MSE must be greater than that for $λ=0$.
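This claim is easy to check numerically. The sketch below (Python/NumPy, with made-up data; all names and numbers are illustrative) fits OLS and ridge on the same sample via the closed forms above and confirms that the in-sample MSE at any $λ>0$ is at least the OLS one:

```python
import numpy as np

# Hypothetical data: N = 4 observations, P = 2 predictors plus an intercept
rng = np.random.default_rng(0)
N, P = 4, 2
X = np.column_stack([np.ones(N), rng.normal(size=(N, P))])
Y = rng.normal(size=N)

def beta_hat(lam):
    # Closed form (X'X + lam*I)^{-1} X'Y; lam = 0 recovers OLS
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Y)

def mse_in(b):
    return np.mean((Y - X @ b) ** 2)

mse_ols = mse_in(beta_hat(0.0))
# In-sample MSE can only grow as lambda moves away from 0
for lam in (0.1, 1.0, 10.0):
    assert mse_in(beta_hat(lam)) >= mse_ols
```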
Another way of looking at this is to consider the parameter space of $β$ explicitly. In your example there are two columns and hence three elements of $β$ (including the intercept):
$
\begin{bmatrix}
β_0 \\
β_1 \\
β_2 \\
\end{bmatrix}
$
Now let us further consider a point of which I will offer no proof (but of which proof is readily available elsewhere): linear models' optimization surfaces are convex, which means that there is only one minimum (i.e., there are no local minima). Hence, if the fitted values of parameters $β_0$, $β_1$, and $β_2$ minimize in-sample MSE, there can be no other set of these parameters' values with in-sample MSE equal to, or less than, the in-sample MSE associated with these values. Therefore, $β$ obtained by any process not mathematically equivalent to the one I walked through above will result in greater in-sample MSE. Since we found that in-sample MSE is minimized when $λ=0$, it is apparent that in-sample MSE must be greater than this minimum when $λ>0$.
$\Large{\text{A note on MSE estimators, in/out of sample, and populations:}}$
The usefulness of the ridge penalty emerges when predicting on out-of-sample data (values of the predictors $X$ on which the model was not trained, but for which the relationships identified in the in-sample data between the predictors and the response are expected to hold), where the expected MSE applies. There are numerous resources online that go into great detail on the relationship between $λ$ and the expected bias and variance, so in the interest of brevity (and my own laziness) I will not expand on that here. However, I will point out the following relationship:
$\hat{MSE}=\hat{bias}^2+\hat{var}$
This is the decomposition of the MSE estimator into its constituent bias and variance components. Within the context of linear models permitting a ridge penalty ($λ \geq 0$), it is generally the case that there is some nonzero value of $λ$ that results in its minimization. That is, the reduction (attributable to $λ$) in $\hat{var}$ eclipses the increase in $\hat{bias}^2$. This has absolutely nothing to do with the training of the model (the foregoing mathematical derivation) but rather has to do with estimating its performance on out-of-sample data. The "population," as some choose to call it, is the same as the out-of-sample data I reference because even though the "population" implicitly includes the in-sample data, the concept of a "population" suggests that infinite samples may be drawn from the underlying process (quantified by a distribution) and hence the influence of the in-sample data's idiosyncrasies on the population vanishes to insignificance.
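The decomposition can be illustrated with a quick Monte Carlo sketch (Python/NumPy; the setup is invented for illustration): estimate a known mean with a ridge-like shrunken estimator over many replications and check that the sampled MSE splits exactly into squared bias plus variance.

```python
import numpy as np

# Invented setup: shrink the sample mean of N(mu, 1) toward zero
rng = np.random.default_rng(1)
mu, n, reps, shrink = 2.0, 20, 5000, 0.8

# One shrunken sample mean per replication
est = shrink * rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)

mse  = np.mean((est - mu) ** 2)
bias = est.mean() - mu
var  = est.var()  # population variance (ddof=0) matches the decomposition

assert np.isclose(mse, bias**2 + var)  # exact algebraic identity
```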
Personally, after writing the foregoing paragraph, I'm even more sure that the discussion of "populations" adds needless complexity to this matter. Data were either used to train the model (in-sample) or they weren't (out-of-sample). If there's a scenario in which this distinction is impossible/impractical I've yet to see it.
As others have pointed out, the reason $β_{λ=0}$ (OLS) appears to have lower MSE than $β_{λ>0}$ (ridge) in your example is that you computed both values of $β$ from a matrix of
four (more generally, $N$) observations of two (more generally, $P$) predictors $X$ and corresponding four response values $Y$ and then computed the loss on these same four observations. Forgetting OLS versus ridge for a moment, let's compute $β$ manually; specifically, we seek $β$ such that it minimizes the MSE of the in-sample data (the four observations). Given that $\hat{Y}=Xβ$, we need to express in-sample MSE in terms of $β$.
$MSE_{in-sample}=\frac{1}{N}\|Y-Xβ\|^2$
$MSE_{in-sample}=\frac{1}{N}[(Y-Xβ)^T(Y-Xβ)]$
$MSE_{in-sample}=\frac{1}{N}[Y^TY-2β^TX^TY+β^TX^TXβ]$
To find the value of $β$ minimizing this expression, we differentiate the expression with respect to $β$, set it equal to zero, and solve for $β$. I will omit the $\frac{1}{N}$ at this point since it's just a scalar and has no impact on the solution.
$\frac{d}{dβ}[Y^TY-2β^TX^TY+β^TX^TXβ]=0$
$-2X^TY+2X^TXβ=0$
$X^TXβ=X^TY$
$β=(X^TX)^{-1}X^TY$
Which is a familiar result. By construction, this is the value of $β$ that results in the minimum in-sample MSE. Let's generalize this to include a ridge penalty $λ$.
$β=(X^TX+λI)^{-1}X^TY$
Given the foregoing, it's clear that for $λ>0$, the in-sample MSE must be greater than that for $λ=0$.
Another way of looking at this is to consider the parameter space of $β$ explicitly. In your example there are two columns and hence three elements of $β$ (including the intercept):
$
\begin{bmatrix}
β_0 \\
β_1 \\
β_2 \\
\end{bmatrix}
$
Now let us further consider a point of which I will offer no proof (but of which proof is readily available elsewhere): linear models' optimization surfaces are convex, which means that there is only one minimum (i.e., there are no local minima). Hence, if the fitted values of parameters $β_0$, $β_1$, and $β_2$ minimize in-sample MSE, there can be no other set of these parameters' values with in-sample MSE equal to, or less than, the in-sample MSE associated with these values. Therefore, $β$ obtained by any process not mathematically equivalent to the one I walked through above will result in greater in-sample MSE. Since we found that in-sample MSE is minimized when $λ=0$, it is apparent that in-sample MSE must be greater than this minimum when $λ>0$.
$\Large{\text{A note on MSE estimators, in/out of sample, and populations:}}$
The usefulness of the ridge penalty emerges when predicting on out-of-sample data (values of the predictors $X$ on which the model was not trained, but for which the relationships identified in the in-sample data between the predictors and the response are expected to hold), where the expected MSE applies. There are numerous resources online that go into great detail on the relationship between $λ$ and the expected bias and variance, so in the interest of brevity (and my own laziness) I will not expand on that here. However, I will point out the following relationship:
$\hat{MSE}=\hat{bias}^2+\hat{var}$
This is the decomposition of the MSE estimator into its constituent bias and variance components. Within the context of linear models permitting a ridge penalty ($λ>=0$), it is generally the case that there is some nonzero value of $λ$ that results in its minimization. That is, the reduction (attributable to $λ$) in $\hat{var}$ eclipses the increase in $\hat{bias}^2$. This has absolutely nothing to do with the training of the model (the foregoing mathematical derivation) but rather has to do with estimating its performance on out-of-sample data. The "population," as some choose to call it, is the same as the out-of-sample data I reference because even though the "population" implicitly includes the in-sample data, the concept of a "population" suggests that infinite samples may be drawn from the underlying process (quantified by a distribution) and hence the influence of the in-sample data's idiosyncracies on the population vanish to insignificance.
Personally, after writing the foregoing paragraph, I'm even more sure that the discussion of "populations" adds needless complexity to this matter. Data were either used to train the model (in-sample) or they weren't (out-of-sample). If there's a scenario in which this distinction is impossible/impractical I've yet to see it. | Mean squared error of OLS smaller than Ridge?
As others have pointed out, the reason $β_{λ=0}$ (OLS) appears to have lower MSE than $β_{λ>0}$ (ridge) in your example is that you computed both values of $β$ from a matrix of
four (more generally, $ |
39,145 | Mean squared error of OLS smaller than Ridge? | The result that gunes underscores, the efficiency of OLS estimators, holds only among unbiased estimators. The ridge estimator introduces bias into the estimates but can achieve lower MSE. See the argument leading up to the theorem you cited (the bias-variance tradeoff). At a practical level, the ridge estimator is useful for prediction, mainly in big-data contexts (many predictors), where the out-of-sample performance of naive OLS regression is usually poorer than that of ridge.
Updating: the question/title is:
Mean squared error of OLS smaller than Ridge?
So, in order to contextualize and remove ambiguity, we have to consider not only the asker's explanation but also the argument that Aristide Herve suggested in a comment on gunes's (the first) answer: the Gauss-Markov theorem and Theorem 1.2 (p. 15) of these lecture notes (https://arxiv.org/pdf/1509.09169); unfortunately, he later deleted the link. My reply was based on those considerations.
The definition of MSE can be written in terms of parameter estimates or of predicted values (https://en.wikipedia.org/wiki/Mean_squared_error), but given the above arguments, the relevant one here is the parameter version:
$MSE(\hat{\beta})=E[(\hat{\beta} - \beta)^2 ]$, where $\beta$ is the true value.
Note that in this definition no train/test sample split is involved: all the data are considered. Moreover, a $bias^2$ term emerges.
Now, from the lecture notes, we can check that for $\lambda>0$:
$E[\hat{\beta}_{RIDGE}] \neq \beta$, so it is a biased estimator, and
$V[\hat{\beta}_{RIDGE}] < V[\hat{\beta}_{OLS}]$
and for some value of $\lambda>0$
$MSE[\hat{\beta}_{RIDGE}] < MSE[\hat{\beta}_{OLS}]$
In fact, we can read (p. 16):
Theorem 1.2 can also be used to conclude on the biasedness of the ridge regression estimator. The Gauss-Markov theorem (Rao, 1973) states (under some assumptions) that the ML regression estimator is the best linear unbiased estimator (BLUE) with the smallest MSE. As the ridge regression estimator is a linear estimator and outperforms (in terms of MSE) this ML estimator, it must be biased (for it would otherwise refute the Gauss-Markov theorem).
For OLS the same considerations hold.
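A small simulation (Python/NumPy; the design and noise level are invented for illustration) makes the claim concrete: with highly correlated regressors, the estimated parameter MSE, $E[\|\hat\beta-\beta\|^2]$, of ridge at a moderate $\lambda$ comes out well below that of OLS, even though ridge is biased.

```python
import numpy as np

# Illustrative setup: near-collinear design fixed across replications
rng = np.random.default_rng(42)
n, p, reps = 30, 2, 2000
beta = np.array([1.0, 1.0])

z = rng.normal(size=n)  # common factor drives the correlation
X = np.column_stack([z + 0.1 * rng.normal(size=n),
                     z + 0.1 * rng.normal(size=n)])

def param_mse(lam):
    sq_err = 0.0
    for _ in range(reps):
        y = X @ beta + rng.normal(size=n)
        b = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
        sq_err += np.sum((b - beta) ** 2)
    return sq_err / reps

assert param_mse(5.0) < param_mse(0.0)  # for this lambda, ridge wins
```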
Therefore the reply of gunes
That is correct because $b_{OLS}$ is the minimizer of MSE by definition.
is wrong, and the remark by develarist
like gunes said, the hastie quote applies to out-of-sample (test) MSE,
whereas in your question you are showing us in-sample (training) MSE,
which Hastie is not referring to.
is wrong too: the Gauss-Markov theorem does not involve any sample split, and the unbiasedness condition is crucial there (https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem).
Therefore: Mean squared error of OLS smaller than Ridge? No, not always. It depends on the value of $\lambda$.
It remains to say what went wrong in Aristide Herve's computation. There are at least two problems. The first is that his suggestions refer to $MSE$ in the parameter-estimation sense, while the computation is focused on fitted/predicted values. In the latter sense it is usual to refer to the Expected Prediction Error ($EPE$) and not to the Residual Sum of Squares ($RSS$). Indeed, for any linear model, it is not possible to achieve a smaller $RSS$ than in the OLS case. The explanations/comments of gunes read this way, and they are correct in this sense; however, minimizing $MSE$ is not the same thing.
More importantly, in order to assess the $MSE$ of several techniques or models on theoretical grounds, we also have to consider the true model, and hence know the bias. Aristide Herve's procedure does not consider this element and therefore cannot be adequate.
Finally, we can also note that something like an "in-sample MSE" computed on fitted values, which Dave, develarist, and gunes refer to, has a dubious meaning. In the spirit of $MSE$ we must also take the bias into account (as I already said, specification matters), whereas if we focus only on residuals (in-sample errors) the bias cannot emerge. Worse, regardless of the linearity of the estimated model, it is always possible to achieve a perfect in-sample fit, that is, an "in-sample MSE" of zero. This discussion gives us the last clarifications: Is MSE decreasing with increasing number of explanatory variables?
In fact, Cagdas Ozgenc shows there that $MSE$ should be understood as a population metric, and that
$E[\hat{MSE}_{in}]<MSE$ (downward biased; after all, this is obvious),
while $E[\hat{MSE}_{out}]=MSE$;
therefore $\hat{MSE}_{in}$ is not what we need. This concludes the story.
39,146 | Why do we use parametric distributions instead of empirical distributions? | An enormous amount of data is needed to accurately estimate a distribution nonparametrically, especially a continuous one. Even then, some assumptions about the smoothness of the distribution are needed for filling the gaps (interpolating) between the observed values and other assumptions are needed for extrapolating outside the observed data range. With a small or moderate sample, you would usually expect poor accuracy from a nonparametric estimation. It would take a large discrepancy between the true distribution and a modelled parametric one used to approximate it to make the nonparametric approach more accurate. This is especially true in higher dimensions, as data become sparser when the dimension grows.
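A quick sketch of the point (Python/NumPy; the sample size, bandwidth rule, and grid are arbitrary choices): with modest samples from a normal distribution, a fitted parametric normal density is, on average, much closer to the truth than a kernel density estimate built from the same data.

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 30, 50
grid = np.linspace(-4, 4, 801)
dx = grid[1] - grid[0]
true_pdf = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)  # the N(0,1) truth

def normal_pdf(g, mu, sd):
    return np.exp(-((g - mu) / sd) ** 2 / 2) / (sd * np.sqrt(2 * np.pi))

ise_para = ise_kde = 0.0
for _ in range(reps):
    x = rng.normal(size=n)
    para = normal_pdf(grid, x.mean(), x.std(ddof=1))             # parametric fit
    h = 1.06 * x.std(ddof=1) * n ** (-0.2)                       # Silverman's rule
    kde = normal_pdf(grid[:, None], x[None, :], h).mean(axis=1)  # Gaussian KDE
    ise_para += np.sum((para - true_pdf) ** 2) * dx              # integrated sq. error
    ise_kde  += np.sum((kde  - true_pdf) ** 2) * dx

assert ise_para < ise_kde  # parametric wins when the model is right
```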
39,147 | Given uniform distributions of X and Y and the mean 0 and standard deviation 1 for both, what’s the probability of 2X>Y [closed] | As to what the question is asking: you have two random variables, $X$ and $Y$. Both are uniformly distributed in some interval, each with mean 0 and standard deviation 1. (From this information, we can calculate the interval in which $X$ and $Y$ actually live.)
We thus draw $X$ and $Y$ randomly and independently from their uniform distribution. Now, it might happen that $2X>Y$, or it might not - this event is again random.
And the question asks how high the probability for this event is.
One often useful first step is to simulate, just to get an idea of the likely result. As per Ben's answer, $X, Y \sim U[-a,a]$ with $a=\sqrt{3}$ (though it doesn't matter at all, see below). Here is a simulation in R:
nn <- 1e5
aa <- sqrt(3)
xx <- runif(nn,-aa,aa)
yy <- runif(nn,-aa,aa)
sum(2*xx>yy)/nn
# [1] 0.49964
This looks very much like $\frac{1}{2}$ might be a possible answer for that probability.
Here is a possible graphical approach. $X$ and $Y$ are uniformly distributed in the square with corners at $(-a,-a)$, $(a,-a)$, $(a,a)$ and $(-a,a)$.
Draw a line through that square, with equation $y=2x$. The points below that line are exactly the ones that satisfy your condition $2X>Y$. Thus, the probability we are looking for is the proportion of the square's area below the line.
And since the line exactly bisects your square, we see that the probability is $\frac{1}{2}$.
Finally, since it doesn't matter at all what numbers we put on the axes as long as everything is centered around zero, we see that it doesn't matter what the standard deviations are. Or even what the constant is in $cX>Y$. And it even works for other bivariate distributions whose density is point symmetric around zero (like a bivariate normal distribution with equal marginal variances and possibly non-zero covariance, where the underlying density would not be a square but an infinite elliptical cloud). The probability always comes out to $\frac{1}{2}$.
R code:
aa <- sqrt(3)
cc <- 2
plot(c(-aa,aa),c(-aa,aa),type="n",xlab="X",ylab="Y",las=1)
polygon(c(-aa,aa,aa,-aa),c(-aa,-aa,aa,aa),col="grey",border=NA)
abline(h=0,lty=2)
abline(v=0,lty=2)
abline(a=0,b=cc,col="red",lwd=2)
polygon(c(-aa/cc,aa,aa,aa/cc),c(-aa,-aa,aa,aa),col="red",density=20,border=NA)
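The closing remark, that the argument extends to any density that is point-symmetric about the origin, can also be checked by simulation; this sketch is in Python rather than R, using a correlated bivariate normal with arbitrarily chosen parameters.

```python
import numpy as np

# Equal marginal variances, nonzero covariance: point-symmetric about (0, 0)
rng = np.random.default_rng(3)
n = 200_000
cov = [[1.0, 0.6], [0.6, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

p = np.mean(2 * x > y)
assert abs(p - 0.5) < 0.01  # symmetry about the origin gives 1/2
```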
39,148 | Given uniform distributions of X and Y and the mean 0 and standard deviation 1 for both, what’s the probability of 2X>Y [closed] | This question actually does not require specification of the standard deviation, since the answer is the same for any IID uniform random variables with zero mean. Moreover, the result holds for any event based on cutting the domain into two equal parts via a straight line segment through the origin (hat tip to Stephen Kolassa for pointing this out). Assuming you are talking about the continuous uniform distribution, in order to have a mean of zero they should have bounds:
$$X,Y \sim \text{IID U}(-a, a).$$
(In the case where they have unit variance you can show that $a=\sqrt{3}$. This result is easily obtained by using the formulae for the mean and variance of the continuous uniform distribution and solving the two equations in two unknowns.) So, for any value $c>0$ you then have:
$$\begin{aligned}
\mathbb{P}(cX > Y)
&= \mathbb{P}(X > Y/c) \\[12pt]
&= \int \mathbb{P}(X > y/c) \cdot f_Y(y) \ dy \\[6pt]
&= \int \Bigg( \int \mathbb{I}(x > y/c) f_X(x) \ dx \Bigg) f_Y(y) \ dy \\[6pt]
&= \int \limits_{-a}^{a} \Bigg( \ \int \limits_{y/c}^{a} f_X(x) \ dx \Bigg) f_Y(y) \ dy \\[6pt]
&= \frac{1}{4 a^2} \int \limits_{-a}^{a} \int \limits_{y/c}^{a} \ dx \ dy \\[6pt]
&= \frac{1}{4 a^2} \int \limits_{-a}^{a} \Big( a - \frac{y}{c} \Big) \ dy \\[6pt]
&= \frac{1}{4 a^2} \Bigg[ a y - \frac{y^2}{2c} \Bigg]_{-a}^{a} \\[6pt]
&= \frac{1}{4 a^2} \Bigg[ \bigg( a^2 - \frac{a^2}{2c} \bigg) - \bigg( -a^2 - \frac{a^2}{2c} \bigg) \Bigg] \\[6pt]
&= \frac{1}{4 a^2} \Bigg[ 2 a^2 \Bigg] \\[6pt]
&= \frac{1}{2}. \\[6pt]
\end{aligned}$$
(Note that this result holds also for $c \leqslant 0$ and the proof in this case is analogous. When $c<0$ we "flip" the inequality sign, but the result comes out the same.) The intuitive reason for this result is quite simple. The line $cx=y$ goes through the origin, and so it cuts the support of the random variables into two parts with equal area. Since both random variables are uniformly distributed over that support, the probability that $cX>Y$ must be one-half.
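The integral above can also be verified by deterministic numerical quadrature (a Python/NumPy sketch; the grid resolution is an arbitrary choice): since the joint density is constant on the square, $\mathbb{P}(cX>Y)$ is just the fraction of the square where the event holds.

```python
import numpy as np

a, c = np.sqrt(3.0), 2.0
m = 2001  # grid points per axis
X, Y = np.meshgrid(np.linspace(-a, a, m), np.linspace(-a, a, m))

# Uniform density on the square, so the probability is an area fraction
p = np.mean(c * X > Y)
assert abs(p - 0.5) < 1e-3
```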
39,149 | Overdispersion in fitted generalized linear model with insignificant regression coefficients | Yes, that is true.
There are only two commonly-used generalized linear model families for which the concept of overdispersion is relevant. These are Poisson regression or binomial regression when the number of trials is greater than one. If the data is genuinely overdispersed then switching from one of these glm regression models to a model that allows for overdispersion will result in larger p-values for the same hypothesis tests.
Note however that it is also possible for data to be underdispersed and, in those circumstances, quasi-Poisson regression or quasi-binomial regression will estimate quasi-dispersions less than one and hence can give smaller p-values than the corresponding Poisson or binomial regressions, especially if the number of observations is large.
On the other hand, if you use a mixture model to model the overdispersion then getting smaller p-values is not possible.
Commonly used mixture models include negative binomial glms to model overdispersion relative to Poisson or beta-binomial regression to model overdispersion relative to the binomial.
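The mixture-model idea can be illustrated by simulation: drawing each Poisson mean from a Gamma distribution (the gamma-Poisson mixture underlying the negative binomial) yields counts whose variance exceeds their mean. A Python sketch for illustration; the Gamma parameters are invented:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate (Knuth's multiplication method)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def gamma_poisson(n, shape, scale, seed=0):
    """Counts from a gamma-Poisson mixture: lam ~ Gamma, then y ~ Poisson(lam)."""
    rng = random.Random(seed)
    return [poisson_sample(rng.gammavariate(shape, scale), rng) for _ in range(n)]

counts = gamma_poisson(20_000, shape=2.0, scale=2.5)  # mean of lam is 5
n = len(counts)
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)
print(mean, var)  # variance clearly exceeds the mean: overdispersion
```

For a plain Poisson sample the two numbers would agree; here the theoretical variance is mean + mean^2/shape, well above the mean.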
39,150 | Overdispersion in fitted generalized linear model with insignificant regression coefficients | Just to add to @GordonSmyth's answer, when you are fitting a quasipoisson or quasibinomial, the variance-covariance matrix is scaled by the dispersion value. This means the standard errors of your coefficients are multiplied by sqrt(dispersion).
For example, we fit a poisson:
library(pscl)
fm_pois <- glm(art ~ ., data = bioChemists, family = poisson)
coefficients(summary(fm_pois))
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.30461683 0.102981443 2.9579779 3.096643e-03
femWomen -0.22459423 0.054613488 -4.1124315 3.915137e-05
marMarried 0.15524338 0.061374395 2.5294487 1.142419e-02
kid5 -0.18488270 0.040126898 -4.6074506 4.076360e-06
phd 0.01282258 0.026397045 0.4857582 6.271386e-01
ment 0.02554275 0.002006073 12.7327095 3.890982e-37
And a quasipoisson:
fm_qpois <- glm(art ~ ., data = bioChemists, family = quasipoisson)
coefficients(summary(fm_qpois))
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.30461683 0.139272885 2.1871941 2.898252e-02
femWomen -0.22459423 0.073859696 -3.0408225 2.426991e-03
marMarried 0.15524338 0.083003199 1.8703301 6.175917e-02
kid5 -0.18488270 0.054267922 -3.4068506 6.859925e-04
phd 0.01282258 0.035699564 0.3591803 7.195436e-01
ment 0.02554275 0.002713028 9.4148462 3.777939e-20
sqrt(summary(fm_qpois)$dispersion)
[1] 1.352408
You can verify that 1.352408 times the standard error of each coefficient from the Poisson model equals the corresponding standard error from the quasipoisson model.
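As a quick numeric check of that scaling, using the intercept row from the two summaries above (Python for illustration):

```python
import math

dispersion = 1.352408 ** 2   # summary(fm_qpois)$dispersion, squared back from its sqrt
se_pois = 0.102981443        # intercept SE from the Poisson fit
se_qpois = 0.139272885       # intercept SE from the quasipoisson fit

scaled = se_pois * math.sqrt(dispersion)
print(scaled)  # matches the quasipoisson SE up to rounding
```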
The one exception I can think of is when your overdispersion is caused by zero counts; in that case, if you fit a zero-inflated model, some of the estimates might change.
39,151 | Feature selection for Logistic Regression | My question: is this the right approach to do feature selection when data volume is high?
Simply, no.
Basing feature selection on p values is a bad idea, especially when data are large. First, p-values tell you nothing about the effect size of the variable. I can always construct a model with a highly significant feature but which performs negligibly differently with respect to any classification metric you choose. This is because significant effects can be extremely small.
When data is large, the null is essentially a straw man. You have so much data that you can detect small effects because you have immense power to do so. The effect of any variable is never exactly 0 and you are finding that.
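To make the straw-man point concrete, here is a small illustration (Python; the proportions and sample sizes are invented): the z-statistic for a fixed, tiny difference in proportions grows with sqrt(n), so with a million observations per group even a one-percentage-point gap is wildly "significant".

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference of two proportions (pooled SE)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Tiny effect, modest sample: nowhere near "significant".
print(two_proportion_z(0.51, 500, 0.50, 500))              # ~0.3
# The same tiny effect, huge sample: enormous z-statistic.
print(two_proportion_z(0.51, 1_000_000, 0.50, 1_000_000))  # ~14
```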
My advice is to use some principled modelling approach. People seem to like AIC (I'm not one of them), you could do forward feature selection (again, not my cup of tea), you could do lasso or ridge regression (I'm more keen on this), or frankly you could do none of them (my preference from what you've said in your post). If you have 12 variables which you know to be important, why aren't you using all of them? That's a rhetorical question.
In short, inference breaks down when you have so much data. The null becomes a straw man, so you reject near everything. People's obsession with p values leads to them using p values for things which they were not intended for (model selection). You should lean on methods which evaluate what you care about via a validation set or lean on your business knowledge.
EDIT:
I claim I can always make a model which performs negligibly better even when the p value is significant. Here is an example using linear regression:
library(tidyverse)
library(Metrics)
set.seed(0)
X = rnorm(1000000)
Z = rnorm(1000000)
y = 2*X + 0.01*Z + rnorm(1000000, 0, 0.3)
d = tibble(X = X, Z = Z, y = y, set =
sample(c('test','train'), replace = T,
size = 1000000))
test = filter(d, set=='test')
train = filter(d, set=='train')
model1 = lm(y~X + Z, data = train)
model2 = lm(y~X, data= train)
rmse(test$y, predict(model1, newdata = test))
#> [1] 0.2996978
rmse(test$y, predict(model2, newdata = test))
#> [1] 0.2998523
Created on 2022-01-06 by the reprex package (v2.0.1)
The rmse for both models agrees up to 3 decimal places. That is good for all intents and purposes in my opinion. Note that the coefficient for Z is highly significant (it gives the smallest p value R can give). The combination of tiny effect size and massive sample is what causes this phenomenon.
39,152 | Feature selection for Logistic Regression | From a computational perspective, 1M data points and 12 features for logistic regression is nothing, i.e., the computer can return results in seconds.
try this example in R, and you will see how fast we can fit.
d=data.frame(matrix(runif(1e6*12),ncol=12))
d$y=sample(c(0,1),1e6, replace = T)
fit = glm(y~.,d,family='binomial')
So if your concern is the computation, it is not necessary to do the feature selection.
On the other hand, if you do feature selection, in most cases, the performance (classification accuracy) will be worse. This is because, intuitively, more information does not hurt; even if a feature is completely irrelevant to the label, the algorithm will just set its coefficient close to zero.
If your focus is classification accuracy instead of interpretability, I would use logistic regression with regularization. See another answer of mine for details
Regularization methods for logistic regression
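As a rough sketch of what "logistic regression with regularization" does, here is a toy Python implementation with hand-rolled gradient descent (the synthetic data, learning rate, and penalty strength are invented for illustration; this is not the method from the linked answer):

```python
import math
import random

def fit_l2_logistic(X, y, lam=0.1, lr=0.5, epochs=300):
    """L2-regularized logistic regression via batch gradient descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        # gradient of mean log-loss plus L2 penalty on the weights (not the bias)
        w = [wj - lr * (gwj / n + lam * wj) for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Synthetic data: the label depends on feature 0 only; feature 1 is pure noise.
rng = random.Random(0)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(500)]
y = [1 if 2 * x0 + rng.gauss(0, 0.5) > 0 else 0 for x0, _ in X]
w, b = fit_l2_logistic(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0 for xi in X]
acc = sum(p == t for p, t in zip(preds, y)) / len(y)
print(w, acc)  # the weight on the signal feature dominates the noise weight
```

The penalty shrinks the irrelevant coefficient toward zero without anyone having to pre-select features.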
Note that "stepwise regression, is now considered a statistical sin."
See this post
What are modern, easily used alternatives to stepwise regression?
39,153 | Feature selection for Logistic Regression | I agree with the others that p values are not useful here and that regularized regression (ridge, elastic net, lasso) is a potential way to go (elastic net might be more useful if the variables are correlated - but which one is best is an empirical question).
I would also decide whether theoretical or potential interactions in the predictors or nonlinearities in the relationships between the predictors and outcome are important to you. If so, you will either need to create them ahead of time - here is a resource with potential considerations looking at interactions in a regularized regression. Also, if interested in interactions or nonlinear relationships, you could consider using or combining your model with a random forest model. One popular option that I have also found success with is Boruta, which is a wrapper around a random forest model that examines whether your features are better than randomly permuted versions of the features. As Demetri pointed out above, any predictor with your sample size would likely have some nonzero relationship with the outcome, making p values for that purpose not useful. Yet, comparing whether the features are significantly better than their random permutations, as Boruta does, is a way that a significant difference using p values can become useful again.
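The core idea behind Boruta, comparing real features against randomly permuted "shadow" copies, can be sketched in a toy form (Python for illustration; a real Boruta run uses random-forest importances rather than raw correlations, and the data here is invented):

```python
import random

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

rng = random.Random(1)
n = 2000
signal = [rng.gauss(0, 1) for _ in range(n)]
noise = [rng.gauss(0, 1) for _ in range(n)]
y = [s + rng.gauss(0, 1) for s in signal]

# Shadow threshold: best |correlation| any shuffled copy of a feature achieves.
shadow_scores = []
for x in (signal, noise):
    shadow = x[:]
    rng.shuffle(shadow)
    shadow_scores.append(abs(corr(shadow, y)))
threshold = max(shadow_scores)

print(abs(corr(signal, y)), threshold)  # the real signal clears the shadow bar
```

A feature that cannot beat its own shuffled shadow is carrying no usable information about the outcome.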
Either way, if the 12 variables you have are considered theoretically useful, you seem to have three options - keeping them all (that's not a lot of features - why not include them all), trying to figure out if some can be dropped without too large a loss in prediction accuracy, or trying to figure out what relationships between these predictors and the outcome are the most useful for prediction. The second option seems to be what you are asking and might be fastest, but the third option might help you the most with prediction over time.
39,154 | Feature selection for Logistic Regression | As with any regression it is best to either be well versed in the subject matter or work with a Subject Matter Expert (SME) to help determine which variables make sense.
A significant step in the process is to look at the stepwise results and see when the point of diminishing returns is reached. In other words, look at the amount of variance explained at each step. At some point the additional variance explained will significantly diminish, which should help you determine a stopping point. Of course, you should always look at the correlation between predictors and how coefficients change as new variables are added, and of course consult with the SME to determine which variables make the most sense.
Also, I would always recommend consulting with an experienced modeler and have them review the final product and the steps along the way, including any initial variable cleansing and transformations. BTW, I always recommend binning be considered with logistic regression.
Other factors include how easy the variables are to monitor and implement, which should always be a practical consideration.
BTW, I noticed the reference to the article on "Problems with stepwise..." and it is certainly valid; however, stepwise, used judiciously, can yield effective, useful results. As with any modeling technique, the results should be tested on an independent (preferably out-of-time, if applicable) sample to validate the results.
39,155 | Feature selection for Logistic Regression | Do you have a stepwise option on your Logistic Regression? That would be preferred.
While all 12 features may yield a significant p-value individually, they may not all be significant when considered in combination with one or more other features. You need to find the best subset.
In any case, it is not the p-values you want to be comparing. If you have significant p-values, what you want to compare is the proportion of variance accounted for. Choose the feature accounting for the largest proportion of variance. Once that is found, run 11 2-feature regressions using that first selected feature combined with each of the remaining 11 features in turn. Then pick the feature that accounts for the most additional variance (as long as the additional amount still has a significant p-value). That gives you the 2 best features. Continue with additional ones until you can no longer account for a significant amount of additional variance.
Obviously, this is a lot of work! But a step-wise option using all 12 variables will do all of this for you automatically. Sometimes there is also a "best subset" option that will effectively test all possible combinations of features to arrive at the best subset. This may not always give the same result as the stepwise option.
39,156 | Where does the underlying statistical model for inference for a proportion come from? | The answer to your question can be very simple or very deep, with some philosophical ideas involved (e.g., Bayesian vs. frequentist).
I will try to answer it from a frequentist's point of view, using Maximum Likelihood Estimation.
We will start with the coin flip example. Let's assume each coin has its own attributes; maybe these attributes are related to the physical mass distribution of the coin or its exact shape (maybe the mass is not evenly distributed, or the shape is not perfectly round). For one given coin, there is one parameter $\theta$ (the probability of getting heads), and this parameter has a "true value" that we want to estimate.
Note that this "true" value is "fixed" and unknown, and we can use experiments to estimate $\theta$.
Suppose we flip this coin $10$ times and we get $6$ heads. How would we estimate $\theta$? By intuition, we may say we use the number of occurrences divided by the sample size. But why do we have this intuition?
The answer is that this is the Maximum Likelihood Estimate (MLE). Note that we can estimate $\theta$ with other estimators, and the estimate does not need to be $6/10$. But it is very reasonable to use the MLE. Here is why.
Assuming independent samples, the probability of getting this data is
$$
\theta^6(1-\theta)^4
$$
On the other hand, we know $\theta$ is between $0$ and $1$. If we plot the probability of getting the data with respect to $\theta$, we get:
Note that the probability of getting the data (the likelihood function) is maximized when we set $\theta$ to $0.6$. And this is why we use the empirical frequency to estimate the unknown parameter.
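A quick grid search (Python for illustration) confirms that $\theta^6(1-\theta)^4$ peaks at $\theta = 6/10$:

```python
def likelihood(theta, heads=6, tails=4):
    """Binomial likelihood up to a constant factor."""
    return theta ** heads * (1 - theta) ** tails

# Evaluate on a fine grid over [0, 1] and find the maximizer.
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=likelihood)
print(best)  # 0.6
```

Analytically, setting the derivative of $\log L(\theta) = 6\log\theta + 4\log(1-\theta)$ to zero gives $6/\theta = 4/(1-\theta)$, i.e., $\theta = 6/10$.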
39,157 | Where does the underlying statistical model for inference for a proportion come from? | Since Haitao gave an explanation of a frequentist approach, I will supply a Bayesian one.
In a Bayesian setting, we still generally believe that there is a "true" $p$. We want to understand how probable different values of $p$ are given the data we have observed. In the coin flip example, $p$ is the probability of observing heads. Say we have a fair coin, and we flip it 100 times, and get 40 heads.
p <- 0.5
flips <- 100
heads <- 40
We can then use the binomial distribution to tell us how likely we would be to observe these results for different values of $p$.
s <- seq(0, 1, length.out = 1000)
plot(
s, dbinom(heads, size = flips, prob = s), type="l",
xlab = "p (probability of heads)",
ylab = "Binomial likelihood"
)
In this case, a maximum likelihood estimator would give us a result of $p=0.4$.
However, imagine we have prior knowledge about how "fair" coins are in general. We could say that the distribution of $p$ (chance of heads) across all coins we've ever seen is described by a Beta distribution, say $\text{Beta}(50, 50)$
prior_alpha <- 50
prior_beta <- 50
plot(
s, dbeta(s, prior_alpha, prior_beta), type="l",
xlab = "p (probability of heads)",
ylab = "Proportion of all coins"
)
We would like to combine this prior knowledge with our likelihood distribution.
Formally, this is allowed by Bayes theorem:
$$p(a|b) = \frac{p(b|a) p(a)}{\int p(b|a)p(a) da}$$
I'll skip the math here. Suffice to say that when we combine our beta and binomial distributions, we get a Beta distribution with updated parameters. This is because the beta is a conjugate prior for the binomial distribution. In this case we take the $\alpha$ of our prior and add heads, and take the $\beta$ of our prior and add flips - heads.
plot(
s,
dbeta(
s,
shape1 = prior_alpha + heads,
shape2 = prior_beta + flips - heads
),
type = "l",
xlab = "p (probability of heads)",
ylab = "Posterior density"
)
By using prior information like this, you can see that we've shrunk our guesses towards what we believe about coins in general.
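A quick numeric check of the conjugate update (Python for illustration): the posterior mean lands between the prior mean (0.5) and the MLE (0.4), which is the shrinkage just described.

```python
prior_alpha, prior_beta = 50, 50
flips, heads = 100, 40

post_alpha = prior_alpha + heads        # 50 + 40 = 90
post_beta = prior_beta + flips - heads  # 50 + 60 = 110
post_mean = post_alpha / (post_alpha + post_beta)
print(post_mean)  # 0.45, between the MLE 0.4 and the prior mean 0.5
```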
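To put numbers on that shrinkage, here is a short continuation using the same hypothetical inputs (100 flips, 40 heads, a $\text{Beta}(50, 50)$ prior); the conjugate posterior is $\text{Beta}(90, 110)$, and qbeta gives a 95% credible interval:

```r
flips <- 100; heads <- 40
prior_alpha <- 50; prior_beta <- 50

post_alpha <- prior_alpha + heads             # 90
post_beta  <- prior_beta + flips - heads      # 110

post_mean <- post_alpha / (post_alpha + post_beta)   # 0.45
ci <- qbeta(c(0.025, 0.975), post_alpha, post_beta)  # 95% credible interval
```

The posterior mean 0.45 lies between the MLE (0.4) and the prior mean (0.5), which is exactly the shrinkage described above.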
39,158 | Where does the underlying statistical model for inference for a proportion come from? | This is saying that we're assuming there is some p such that the event happens with probability p. At this point, we are not making any assumptions about p other than it exists. We later make an estimate of p, but p itself is not obtained; it remains an unknown theoretical quantity.
39,159 | Does Bayes theorem apply to joint distributions of discrete and continuous random variables? | Yes.
A joint distribution $f_{X,Z}(x, z)$ of continuous variable $X \sim f_X$, and discrete variable $Z \sim p_Z$, is defined as any non-negative function of $x$ and $z$ that satisfies
$$
\int f_{X,Z}(x, z) dx = p_Z(z),
$$
$$
\sum_z f_{X,Z}(x, z) = f_X(x).
$$
For a given distribution $f_{X,Z}$, the conditional distributions are defined:
$$
p_{Z \mid X}(z) \equiv \frac{f_{X,Z}(x, z)}{f_X(x)},
$$
and
$$
f_{X \mid Z}(x) \equiv \frac{f_{X,Z}(x, z)}{p_Z(z)}.
$$
Note that both expressions satisfy the proper unity condition when you apply the sum or integral from earlier.
The mixed form of Bayes theorem can be obtained simply by rearranging the above formulas for the conditional distribution. Rearranging the second equation for $f_{X,Z}(x, z)$ and substituting the result into the first equation, you get,
$$
p_{Z \mid X}(z) = \frac{f_{X \mid Z}(x) p_Z(z)}{f_X(x)}.
$$
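A quick numerical sanity check of the mixed form, with made-up numbers (discrete $Z$ choosing between two normal components for $X$):

```r
# Z in {1, 2} with PMF p_Z; X | Z = z ~ N(mu[z], 1)
p_z <- c(0.3, 0.7)
mu  <- c(-1, 2)
x   <- 0.5                                # observed value of X

f_x_given_z <- dnorm(x, mean = mu)        # f_{X|Z}(x) for each z
f_x <- sum(f_x_given_z * p_z)             # marginal f_X(x) = sum_z f_{X|Z}(x) p_Z(z)
p_z_given_x <- f_x_given_z * p_z / f_x    # the mixed Bayes theorem above

sum(p_z_given_x)                          # 1: a proper PMF over z
```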
39,160 | Does Bayes theorem apply to joint distributions of discrete and continuous random variables? | [This is just rough intuition, avoiding measure theory]
For continuous random variables ${ X }$ and ${ Y ,}$ their joint density ${ f _{X, Y} }$ (if it exists) is a map ${ f _{ {\color{purple}{X}}, {\color{blue}{Y}} } : {\color{purple}{\mathbb{R}}} \times {\color{blue}{\mathbb{R}}} \to \mathbb{R} _{\geq 0} }$ such that probability of any event ${ (X,Y) \in [a,b] \times [c,d] }$ is the volume under graph of ${ f _{X,Y} }$ and over ${ [a,b] \times [c,d] }.$
${ \mathbb{P}(X \in [a,b], Y \in [c,d]) }$ ${ = \int \int _{[a,b] \times [c,d]} f _{X,Y} (x,y) \, dx \, dy }$
Intuitively, ${ \mathbb{P}(X \in [x, x + \Delta x], Y \in [y, y + \Delta y]) }$ ${ \approx f _{X,Y} (x, y) \Delta x \Delta y }$ for small ${ \Delta x, \Delta y \gt 0 }$ (assuming continuity of ${ f _{X,Y} }$ at ${ (x,y) }$).
Similarly for a discrete random variable ${ X }$ with range ${ \lbrace x _1, x _2, \ldots \rbrace }$ and a continuous random variable ${ Y },$ their joint density ${ f _{X, Y} }$ (if it exists) is a map ${ f _{ {\color{purple}{X}}, {\color{blue}{Y}} } : {\color{purple}{\lbrace x _1, x _2, \ldots \rbrace}} \times {\color{blue}{\mathbb{R}}} \to \mathbb{R} _{\geq 0} }$ such that probability of any event ${ (X, Y) \in \lbrace x _i \rbrace \times [c,d] }$ is the area under graph of ${ f _{X,Y} }$ and over ${ \lbrace x _i \rbrace \times [c,d] }.$
${ \mathbb{P}(X = x _i, Y \in [c,d]) }$ ${ = \int _{[c,d]} f _{X, Y} (x _i, y) \, dy }$
Intuitively, ${ \mathbb{P}(X = x _i, Y \in [y, y + \Delta y]) }$ ${ \approx f _{X, Y} (x _i, y) \Delta y }$ for small ${ \Delta y \gt 0 }$ (assuming continuity of ${ f _{X,Y} (x _i, \cdot) }$ at ${ y }$).
Eg: From this, in the ${ X }$ discrete and ${ Y }$ continuous case:
[Marginals] PMF ${ \mathbb{P}(X = x _i) }$ ${ = \int f _{X,Y} (x _i, y) dy .}$ Also ${ \mathbb{P}(Y \in [y, y + \Delta y]) }$ ${ = \sum _i \mathbb{P}(X = x _i, Y \in [y, y + \Delta y]) }$ ${ \approx \sum _{i} f _{X,Y} (x _i, y) \Delta y}$ suggesting ${ f _Y (y) = \sum _{i} f _{X,Y} (x _i, y) .}$
[Conditionals] Intuitively, similar to how ${ f _Y (y) \Delta y \approx \mathbb{P}(Y \in [y, y + \Delta y]) },$ conditional density ${ f _{Y \vert X } ( \cdot \vert x _i) }$ is such that ${ f _{Y \vert X} (y {\color{red}{\vert x _i)}} \Delta y }$ ${ \approx \mathbb{P}(Y \in [y, y + \Delta y] {\color{red}{\vert X = x _i)}} }.$ So ${ f _{Y \vert X} (y \vert x _i) \Delta y }$ ${ \approx \frac{f _{X,Y} (x _i, y) \Delta y}{\int f _{X,Y} (x _i, y) dy} }$ suggesting ${ f _{Y \vert X} (y \vert x _i) }$ ${ = \frac{f _{X,Y} (x _i, y)}{\int f _{X,Y} (x _i, y) dy} .}$
Intuitively, conditional PMF ${ p _{X \vert Y} (\cdot \vert y) }$ is such that ${ p _{X \vert Y} (x _i \vert y) }$ is limit of ${ \mathbb{P}(X = x _i \vert Y \in [y, y + \Delta y]) }$ as ${ \Delta y \to 0 ^{+} }.$ But ${ \mathbb{P}(X = x _i \vert Y \in [y, y + \Delta y]) }$ ${ \approx \frac{f _{X,Y} (x _i, y) \Delta y }{ \sum _i f _{X,Y} (x _i, y) \Delta y} }$ suggesting ${ p _{X \vert Y} (x _i \vert y) }$ ${ = \frac{ f _{X,Y} (x _i, y)}{\sum _i f _{X,Y} (x _i, y) }. }$
To summarise, marginals are ${ p _{X} (x _i) = \int f _{X,Y} (x _i, y) \, dy }$ and ${ f _Y (y) = \sum _i f _{X,Y} (x _i, y) },$ and conditionals are ${ f _{Y \vert X} (y \vert x _i) = \frac{f _{X,Y}(x _i, y)}{p _{X} (x _i)} }$ and ${ p _{X \vert Y} (x _i \vert y) = \frac{f _{X,Y} (x _i, y)}{f _Y (y)} .}$
In particular, ${ X, Y }$ independent implies ${ f _{Y \vert X} (y \vert x _i) = f _Y (y) },$ i.e. ${ f _{X,Y} (x _i, y) = p _{X} (x _i) f _Y (y) },$ and vice versa. We also have the Bayes rule ${ f _{Y \vert X} (y \vert x _i) }$ ${ = \frac{p _{X \vert Y} (x _i \vert y) f _Y (y) }{p _X (x _i)} }.$
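The identity ${ \mathbb{P}(X = x _i, Y \in [c,d]) = \int _{[c,d]} f _{X, Y} (x _i, y) \, dy }$ can also be checked by simulation, with a made-up example ($X$ in $\lbrace 0, 1 \rbrace$, $Y \mid X$ normal):

```r
set.seed(3)
n  <- 2e5
p1 <- 0.4                                    # P(X = 1)
x  <- rbinom(n, 1, p1)
y  <- rnorm(n, mean = ifelse(x == 1, 1, -1)) # Y | X = x_i is continuous

# joint density over {1} x R: f_{X,Y}(1, y) = P(X = 1) * f_{Y|X}(y | 1)
f_joint <- function(y) p1 * dnorm(y, mean = 1)

lhs <- mean(x == 1 & y >= 0 & y <= 1)        # simulated P(X = 1, Y in [0, 1])
rhs <- integrate(f_joint, 0, 1)$value        # area under f_{X,Y}(1, .) over [0, 1]
c(lhs, rhs)                                  # both approximately 0.137
```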
39,161 | What are the assumptions in bayesian statistics? | Let me use the linear regression example that you mentioned. The simple linear regression model is
$$
y_i = \alpha + \beta x_i + \varepsilon_i
$$
with noise being independent, normally distributed random variables $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. This is equivalent to stating the model in terms of a normal likelihood function
$$
y_i \sim \mathcal{N}(\alpha + \beta x_i, \;\sigma^2)
$$
The assumptions that we make follow from the probabilistic model that we defined:
we assumed that the model is linear,
we assumed i.i.d. variables,
the variance $\sigma^2$ is the same for every observation, so we have homoscedasticity,
we assumed that the likelihood (or the noise, in the first formulation) follows a normal distribution, so we do not expect to see heavy tails etc.
Plus some more "technical" things like no multicollinearity, that follow from the choice of method for estimating the parameters (ordinary least squares).
(Notice that those assumptions are needed for things like confidence intervals, and testing, not for the least squares linear regression. For details check What is a complete list of the usual assumptions for linear regression? )
The only thing that changes with Bayesian linear regression is that instead of using optimization to find point estimates for the parameters, we treat them as random variables, assign priors to them, and use Bayes theorem to derive the posterior distribution. So the Bayesian model inherits all the assumptions we made for the frequentist model, since those are assumptions about the likelihood function. Basically, the assumption we make is that the likelihood function we've chosen is a reasonable representation of the data.
As for priors, we do not make assumptions about priors, since priors are our a priori assumptions about the parameters.
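As a minimal sketch of that Bayesian step (made-up data; for brevity only the slope $\beta$ is treated as unknown, with $\alpha$ and $\sigma$ fixed at their true values, a weak normal prior, and a grid approximation instead of MCMC):

```r
set.seed(1)
n <- 50
x <- runif(n)
y <- 1 + 2 * x + rnorm(n, sd = 0.5)        # true alpha = 1, beta = 2, sigma = 0.5

beta_grid <- seq(-2, 6, length.out = 801)
log_lik <- sapply(beta_grid, function(b)   # normal likelihood from the model above
  sum(dnorm(y, mean = 1 + b * x, sd = 0.5, log = TRUE)))
log_prior <- dnorm(beta_grid, mean = 0, sd = 10, log = TRUE)  # weak prior on beta

log_post <- log_lik + log_prior
post <- exp(log_post - max(log_post))
post <- post / sum(post)                   # normalised posterior over the grid

beta_grid[which.max(post)]                 # posterior mode, close to the true slope 2
```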
39,162 | What are the assumptions in bayesian statistics? | Assumptions in Bayesian statistics are generally stronger than the corresponding frequentist ones, because you need, in every model, to specify the full distribution of your data and parameters.
In many cases, the Gaussian distribution is used, because of its relation to the expected value and the arithmetic mean, without really believing the assumption of normality; it has been shown that the results are quite robust to departures from normality, provided the other conditions are respected.
Another example of a distribution used in Bayesian statistics even when the data is not really believed to follow it is the asymmetric Laplace, used for quantile regression. Bayesian models are very varied; I don't know which ones you are talking about, but most probably Gaussian ones. In that case, if you respect the same assumptions as for frequentist models, you should be ok (homoskedasticity is one of those, unless heteroskedasticity is explicitly addressed).
39,163 | Clarifications on Poisson Regression | In Poisson regression we use an exponential mean function (equivalently, a log link). This means that
$$
\mathbb{E}[Y | X] = e^{W^TX}.
$$
Note that the expression above contains the expectation, not a probability. The expression is known as intensity and is usually denoted with $\lambda(X)$. Conditional on $X$, variable $Y$ has Poisson distribution with parameter $\lambda(X)$. This means that
$$
P(Y = k | X) = \frac{\lambda(X)^k}{k!} e^{-\lambda(X)} = \frac{e^{kW^TX}}{k!} e^{-e^{W^TX}},\ \ \ k = 0, 1, 2, ...
$$
As you can see, $Y$ takes only non-negative integer values.
$$
\mathbb{E}[Y | X] = e^{W^TX}.
$$
Note that the expression above contains the expectation, not a probability. The expression i | Clarifications on Poisson Regression
In Poisson regression we use exponential link function. This means that
$$
\mathbb{E}[Y | X] = e^{W^TX}.
$$
Note that the expression above contains the expectation, not a probability. The expression is known as intensity and is usually denoted with $\lambda(X)$. Conditional on $X$, variable $Y$ has Poisson distribution with parameter $\lambda(X)$. This means that
$$
P(Y = k | X) = \frac{\lambda(X)^k}{k!} e^{-\lambda(X)} = \frac{e^{kW^TX}}{k!} e^{-e^{W^TX}},\ \ \ k = 0, 1, 2, ...
$$
As you can see, $Y$ takes only non-negative integer values. | Clarifications on Poisson Regression
In Poisson regression we use exponential link function. This means that
$$
\mathbb{E}[Y | X] = e^{W^TX}.
$$
Note that the expression above contains the expectation, not a probability. The expression i |
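To illustrate with simulated data (made-up coefficients $W = (0.5, 1.5)$), glm with a log link recovers the intensity parameters:

```r
set.seed(123)
n <- 1000
x <- runif(n)
lambda <- exp(0.5 + 1.5 * x)        # intensity lambda(X) = e^{W^T X}
y <- rpois(n, lambda)               # Y | X ~ Poisson(lambda(X)), non-negative integers

fit <- glm(y ~ x, family = poisson(link = "log"))
coef(fit)                           # estimates close to (0.5, 1.5)
```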
39,164 | When does the sum of the medians = the median of the sum | Actually my comment is not entirely correct; allow me to clear this up.
The median of a series of numbers $X$ is calculated by ordering all the numbers from smallest to largest, then finding the number in the middle. This means that when you change the numbers in $X$ you also change the ordering, hence the median changes. Therefore (in general) you can almost always assume that:
$$
\text{MED}(X + Y) \neq \text{MED}(X) + \text{MED}(Y)
$$
However there is at least one exception: whenever adding $Y$ to $X$ does not change the ordering, neither does the median's position. For instance, take $Y$ to be an identical copy of $X$ (called c below), as in this example (written in R):
set.seed(42)
n <- 100
x <- rnorm(n)
c <- x
y <- rnorm(n)
median(x+y) # 0.0767433
median(x) + median(y) # 0.02050838
median(x + c) # 0.1795935
median(x) + median(c) # 0.1795935
39,165 | When does the sum of the medians = the median of the sum | For continuous variables the following are equivalent
$$\text{M}(X + Y) = \text{M}(X) + \text{M}(Y) \\ \iff \\ \mathbb{P}[(X-\text{M}(X)) > -(Y -\text{M}(Y))] =
\mathbb{P}[(X-\text{M}(X)) < -(Y -\text{M}(Y))] $$
You can imagine this geometrically from the joint distribution of X and Y. Half the mass needs to be on either side of the line $x+y=\text{median}(X)+\text{median}(Y)$ (or equal masses for discrete variables).
In words: for a random pair $(x, y)$, the probability that $x$ exceeds the median of $X$ by more than $y$ falls below the median of $Y$ equals the probability that it exceeds it by less.
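A quick empirical check of this condition, with a made-up example (two independent normals, where symmetry makes the two probabilities equal, so the identity holds):

```r
set.seed(7)
n <- 1e5
x <- rnorm(n, mean = 3)
y <- rnorm(n, mean = -1)

# mass on either side of the line x + y = M(X) + M(Y)
above <- mean((x - median(x)) > -(y - median(y)))
above                                     # approximately 0.5

median(x + y) - (median(x) + median(y))   # approximately 0
```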
39,166 | When does the sum of the medians = the median of the sum | Comment: This is parallel with other comments, but it might give you a quick way to check whether one variable increases precisely when the other does.
If the Spearman correlation between x and y is $1,$ I believe the sum of the medians is the median of the sum. In R:
x = rexp(100); y = sqrt(x)
median(x+y)
[1] 1.598729
median(x)+median(y)
[1] 1.598729
cor(x,y, meth="spearman")
[1] 1
The other situation (approximate) discussed in comments is symmetry:
u = runif(100); z = rnorm(100)
mean(u+z); median(u+z)
[1] 0.5401409
[1] 0.5229718
mean(u)+mean(z)
[1] 0.5401409
median(u)+median(z)
[1] 0.5866283
you a quick way to check whether one variable increases precisely when
the other does.
If Spearman correlation between x and y is $1,$ | When does the sum of the medians = the median of the sum
Comment: This is parallel with other comments, but it might give
you a quick way to check whether one variable increases precisely when
the other does.
If Spearman correlation between x and y is $1,$ I believe the sum of medians
is the median of the sum. In R:
x = rexp(100); y = sqrt(x)
median(x+y)
[1] 1.598729
median(x)+median(y)
[1] 1.598729
cor(x,y, meth="spearman")
[1] 1
The other situation (approximate) discussed in comments is symmetry:
u = runif(100); z = rnorm(100)
mean(u+z); median(u+z)
[1] 0.5401409
[1] 0.5229718
mean(u)+mean(z)
[1] 0.5401409
median(u)+median(z)
[1] 0.5866283 | When does the sum of the medians = the median of the sum
Comment: This is parallel with other comments, but it might give
you a quick way to check whether one variable increases precisely when
the other does.
If Spearman correlation between x and y is $1,$ |
39,167 | How does one most easily overfit? | As long as all the observations are unique, K-nearest neighbors with K set to 1 and any valid distance metric will give a classifier which perfectly fits the training set (since the nearest neighbor of every point in the training set is, trivially, itself). And it's probably the most efficient approach, since no training at all is needed.
Is that the most efficient way to encode the Boundary? Probably right?
Since we don't know if the data is entirely random or not, using the
data itself as the encoded model with KNN algorithm is probably the
best you can generally do. Right?
It's the most time-efficient, but not necessarily the most space efficient.
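A small sketch of that point with made-up data; each training point's nearest neighbour is itself, so 1-NN training accuracy is 1 by construction:

```r
set.seed(1)
n <- 30
train  <- matrix(runif(n * 2), ncol = 2)    # continuous features: all points unique
labels <- sample(c("a", "b"), n, replace = TRUE)

# 1-NN prediction for each training point, searching the training set itself
pred <- sapply(1:n, function(i) {
  d <- colSums((t(train) - train[i, ])^2)   # squared Euclidean distances to point i
  labels[which.min(d)]                      # the nearest neighbour is point i itself
})

mean(pred == labels)                        # 1: a perfect (over)fit of the training set
```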
39,168 | How does one most easily overfit? | You can't.
At least not in general, to the degree you want, if you want a perfect fit with arbitrary data and arbitrary dimensionality.
As an example, suppose we have $n_1=0$ predictor dimensions (i.e., none at all) and $n_2=2$ observations classified into $n_3=2$ buckets. The two observations are classified into two different buckets, namely "chocolate" and "vanilla".
Since you don't have any predictors, you will not be able to classify them perfectly, period.
If you have at least one predictor that takes different values on each observation, then you can indeed overfit arbitrarily badly, simply by using arbitrarily high polynomial orders for a numerical predictor (if the predictor is categorical with different values on each observation, you don't even need to transform). The tool or model is pretty much secondary. Yes, it's easy to overfit.
Here is an example. The 10 observations are completely independent of the single numerical predictor. We fit increasingly complex logistic regressions using higher powers of the predictor, and classify using a threshold of 0.5 (which is not good practice). Correctly fitted points are marked in green, incorrectly fitted ones in red.
R code:
nn <- 10
set.seed(2)
predictor <- runif(nn)
outcome <- runif(nn)>0.5
plot(predictor,outcome,pch=19,yaxt="n",ylim=c(-0.1,1.6))
axis(2,c(0,1),c("FALSE","TRUE"))
orders <- c(1,2,3,5,7,9)
xx <- seq(min(predictor),max(predictor),0.01)
par(mfrow=c(3,2))
for ( kk in seq_along(orders) ) {
plot(predictor,outcome,pch=19,yaxt="n",ylim=c(-0.2,1.2),main=paste("Order:",orders[kk]))
axis(2,c(0,1),c("FALSE","TRUE"))
model <- glm(outcome~poly(predictor,orders[kk]),family="binomial")
fits_obs <- predict(model,type="response")
fits <- predict(model,newdata=data.frame(predictor=xx),type="response")
lines(xx,fits)
correct <- (fits_obs>0.5 & outcome) | ( fits_obs<0.5 & !outcome)
points(predictor[correct],outcome[correct],cex=1.4,col="green",pch="o")
points(predictor[!correct],outcome[!correct],cex=1.4,col="red",pch="o")
}
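The same point can be made outside R. Here is a minimal Python sketch (not part of the original answer; data invented for illustration) of the "arbitrarily high polynomial order" route: with one numerical predictor taking distinct values, a polynomial of degree $n-1$ interpolates all $n$ training points exactly, a perfect overfit of pure noise.

```python
import random

# Lagrange interpolation: a degree n-1 polynomial through n points with
# distinct x-values fits any training set perfectly -- the textbook
# illustration of unlimited overfitting capacity.
def lagrange(xs, ys):
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

random.seed(2)
xs = [i / 9 for i in range(10)]             # 10 distinct predictor values
ys = [random.random() for _ in range(10)]   # outcome independent of the predictor
fit = lagrange(xs, ys)
train_errors = [abs(fit(x) - y) for x, y in zip(xs, ys)]
print(max(train_errors))  # essentially zero: every training point fitted exactly
```

The fit is of course useless between and beyond the training points, which is precisely the point.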
39,169 | lm and glm function in R | If you take a look at the R help documentation you will note that there is no family argument for the lm function. By definition, lm models (ordinary linear regression) in R are fit using ordinary least squares (OLS), which assumes the error terms of your model are normally distributed (i.e. family = gaussian) with mean zero and a common variance. You cannot run an lm model using other link functions (there are other functions to do that if you wanted to--you just can't use lm). In fact, when you try to run the lm code you've presented above, R will generate a warning like this:
> Warning message: In lm.fit(x, y, offset = offset, singular.ok =
> singular.ok, ...) : extra argument ‘family’ is disregarded.
When you fit your model using glm, on the other hand, you specified that the outcomes in your model were binomial, using a logit link function. This essentially constrains your model so that the error variance is no longer assumed constant and the observed responses can only be 0 or 1 for each observation. When you used lm you made no such assumptions; instead, your fitted model assumed your errors could take on any value on the real number line. Put another way, lm is a special case of glm (one in which the error terms are assumed normal). It's entirely possible that you get a good approximation using lm instead of glm, but it may not be without problems. For example, nothing in your lm model will prevent your predicted values from lying outside $y\in [0, 1]$. So, how would you treat a predicted value of 1.05, for example (or maybe even trickier, 0.5)? There are a number of other reasons to usually select the model that best describes your data, rather than using a simple linear model, but rather than re-hashing them here, you can read about them in past posts like this one, this one, or perhaps this one.
Of course, you can always use a linear model if you wanted to--it depends on how precise you need to be in your predictions and what the consequences are of using predictions or estimates that might have the drawbacks noted.
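To make the out-of-range problem concrete, here is a minimal Python sketch (illustrative only -- the question itself concerns R's lm; the data are invented) fitting a one-predictor least-squares line to 0/1 outcomes and evaluating it just beyond the data:

```python
# Closed-form simple linear regression on a binary outcome.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0, 0, 0, 1, 1, 1]   # binary outcome coded 0/1

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

predict = lambda t: b0 + b1 * t
print(predict(6.0))   # greater than 1: a "probability" above one
print(predict(-1.0))  # less than 0
```

Nothing in the least-squares fit keeps the line inside $[0,1]$, which is one reason to prefer the logit-linked glm for binary responses.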
39,170 | lm and glm function in R | Linear regression (lm in R) does not have a link function and assumes a normal distribution. It is the generalized linear model (glm in R) that generalizes the linear model beyond what linear regression assumes and allows for such modifications. In your case, the family parameter was passed into the ... argument and passed on to other methods, which ignore the unused parameter. So basically, you've run linear regression on your data.
39,171 | Can the discrete variable be a negative number? | Your intuition is correct -- a discrete variable can take on negative values.
The example is just an example: a person can't have $-2$ children, but the difference in scores between Home and Away sports teams can be $-2$ when the Home team is behind by two points.
Discrete variables with negative values exist all over the place. Two prominent examples:
Rademacher distribution
Skellam distribution
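A quick Python sketch of the Skellam idea (pure standard library; illustrative, not from the original answer): the difference of two Poisson counts -- e.g. Home goals minus Away goals -- is a discrete variable that goes negative about as often as positive.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's algorithm for drawing a Poisson(lam) count
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

rng = random.Random(0)
# Skellam-distributed values: differences of two independent Poisson counts
diffs = [poisson(2.0, rng) - poisson(2.0, rng) for _ in range(5000)]
print(min(diffs), max(diffs))  # integer values on both sides of zero
```

With equal rates the distribution is symmetric around zero, so negative values are just as legitimate as positive ones.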
39,172 | Can the discrete variable be a negative number? | The difference between continuous and discrete variables is not an essential mathematical one like the difference between natural and real numbers. It's just a matter of practicality: we use different tools to address each one because we are interested in answering different questions.
Basically, in discrete variables we are interested in the frequency of each value, but in continuous variables we are just interested in the frequency of intervals. We therefore treat as continuous those variables where two or more cases taking the same value is just an anecdote - unlikely and/or uninteresting - and we model them as being able to take any real value in an interval. Otherwise, we model the variable as discrete, with just a finite or countable set of possible values.
For example: monetary quantities (prices, income, GDP and so on) are usually modeled as continuous variables. However, they can actually take only a countable set of values, because we record monetary values only up to some precision - usually 1 cent.
Some Euro-area countries' previous currencies were valued at less than 1 euro cent (e.g. the Spanish peseta and the Italian lira). In those countries cents had fallen into disuse long ago and all prices and wages were natural numbers, but when the euro was introduced they gained a couple of decimal figures. Sometimes my students say that prices in pesetas were discrete variables but prices in euros are continuous ones, but that's plainly wrong, because we are interested in the same questions and use the same statistical tools for both.
In summary, and returning to the question: the difference between discrete and continuous variables is just a matter of convenience, and you can treat a variable as discrete even if it takes negative values. You just need it to take few enough values for the frequency of each one to be of interest.
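The value-frequency versus interval-frequency distinction can be sketched in a few lines (Python; the cent-precision prices are invented for illustration):

```python
from collections import Counter
import random

rng = random.Random(1)
# Invented cent-precision prices: formally a discrete variable
# (only multiples of 0.01 occur), yet almost every exact value is unique.
prices = [round(rng.uniform(0, 100), 2) for _ in range(200)]

value_counts = Counter(prices)     # frequency of each exact value
print(max(value_counts.values()))  # nearly all exact values occur just once

# What we actually summarise are interval frequencies, as for a
# continuous variable; clamp so a drawn 100.00 falls in the top bin.
bins = Counter(min(int(p // 10), 9) for p in prices)
print(sorted(bins.items()))
```

Per-value counts are uninformative here, while the binned counts are exactly the summary we would use for a continuous variable -- which is the practical sense in which such prices are treated as continuous.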
39,173 | Sufficient Statistic for $\beta$ in OLS | Sometimes the simplest way to look at sufficiency is by looking directly at the log-likelihood and using the factorisation theorem. For a linear regression model with a Gaussian error term, the log-likelihood function can be written as:
$$\begin{equation} \begin{aligned}
\ell_{\mathbf{y}, \mathbf{x}}(\boldsymbol{\beta}, \sigma)
&= - n \ln \sigma -\frac{1}{2 \sigma^2} || \mathbf{y} - \mathbf{x} \boldsymbol{\beta} ||^2 \\[6pt]
&= - n \ln \sigma -\frac{1}{2 \sigma^2} (\mathbf{y} - \mathbf{x} \boldsymbol{\beta})^\text{T} (\mathbf{y} - \mathbf{x} \boldsymbol{\beta} ) \\[6pt]
&= - n \ln \sigma -\frac{1}{2 \sigma^2} (\mathbf{y}^\text{T} \mathbf{y} - \mathbf{y}^\text{T} \mathbf{x} \boldsymbol{\beta} - \boldsymbol{\beta}^\text{T} \mathbf{x}^\text{T} \mathbf{y} + \boldsymbol{\beta}^\text{T} \mathbf{x}^\text{T} \mathbf{x} \boldsymbol{\beta} ) \\[6pt]
&= - n \ln \sigma -\frac{1}{2 \sigma^2} \mathbf{y}^\text{T} \mathbf{y} +\frac{1}{2 \sigma^2} ( 2 \boldsymbol{\beta}^\text{T} \mathbf{T}_1 - \boldsymbol{\beta}^\text{T} \mathbf{T}_2 \boldsymbol{\beta} ) \\[6pt]
&= h(\mathbf{y}, \sigma) + g_\boldsymbol{\beta}(\mathbf{T}_1, \mathbf{T}_2, \sigma), \\[6pt]
\end{aligned} \end{equation}$$
where $\mathbf{T}_1 \equiv \mathbf{T}_1(\mathbf{x}, \mathbf{y}) \equiv \mathbf{x}^\text{T} \mathbf{y}$ and $\mathbf{T}_2 \equiv \mathbf{T}_2(\mathbf{x}, \mathbf{y}) \equiv \mathbf{x}^\text{T} \mathbf{x}$. This shows that the statistic $\mathbf{T} \equiv (\mathbf{T}_1, \mathbf{T}_2)$ is sufficient for the coefficient parameter $\boldsymbol{\beta}$. There is no requirement that the design matrix be of full rank for sufficiency, but if it is not of full rank then these statistics are not minimal sufficient (and you obtain a minimal sufficient statistic by reducing the design matrix to full rank).
From the above form we can also see that the OLS estimator $\hat{\boldsymbol{\beta}}$ is not by itself sufficient for $\boldsymbol{\beta}$. Sufficiency also requires knowledge of the matrix $\mathbf{T}_2 = \mathbf{x}^\text{T} \mathbf{x}$, which arises as part of the covariance of the OLS estimator. This tells us that, in the case where the design matrix is of full rank, the OLS estimator and its covariance matrix are jointly sufficient for the unknown coefficient parameter. (Of course, it is worth noting that regression problems always condition on $\mathbf{x}$, so in this context we get the required sufficiency.)
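The factorisation above can be checked numerically: the $\boldsymbol{\beta}$-dependent part of $||\mathbf{y} - \mathbf{x}\boldsymbol{\beta}||^2$ depends on the data only through $\mathbf{T}_1=\mathbf{x}^\text{T}\mathbf{y}$ and $\mathbf{T}_2=\mathbf{x}^\text{T}\mathbf{x}$. A small Python sketch (invented data; intercept-plus-slope design matrix):

```python
# Design matrix columns: an intercept and a single predictor.
x = [0.0, 1.0, 2.0, 3.0]
y = [0.1, 0.9, 2.2, 2.8]
n = len(x)

# Sufficient statistics T1 = X'y and T2 = X'X for the coefficient parameter.
T1 = (sum(y), sum(xi * yi for xi, yi in zip(x, y)))
T2 = ((n, sum(x)), (sum(x), sum(xi * xi for xi in x)))
yy = sum(yi * yi for yi in y)  # y'y: does not involve beta

def rss_direct(b0, b1):
    # ||y - X beta||^2 computed from the raw data
    return sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))

def rss_from_stats(b0, b1):
    # The same quantity reconstructed from (y'y, T1, T2) alone:
    # y'y - 2 beta'T1 + beta'T2 beta
    bT1 = b0 * T1[0] + b1 * T1[1]
    bT2b = b0 * b0 * T2[0][0] + 2 * b0 * b1 * T2[0][1] + b1 * b1 * T2[1][1]
    return yy - 2 * bT1 + bT2b

for b in [(0.0, 1.0), (0.05, 0.95), (-1.0, 2.0)]:
    assert abs(rss_direct(*b) - rss_from_stats(*b)) < 1e-9
print("beta-dependent likelihood part recovered from (T1, T2) alone")
```

Any two datasets sharing the same $(\mathbf{y}^\text{T}\mathbf{y}, \mathbf{T}_1, \mathbf{T}_2)$ therefore yield the same likelihood in $\boldsymbol{\beta}$, which is the content of sufficiency here.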
39,174 | Sufficient Statistic for $\beta$ in OLS | I was looking for some detailed proof for the simple linear model. By the factorisation theorem we have the following solution.
Considering the classical model
$$y_i=\beta_0+\beta_1 x_i+\varepsilon_i,\ \text{ where }\ \varepsilon_i\sim \mathrm{N}(0;\sigma^2),\ \hbox{Cov}(\varepsilon_i,\varepsilon_j)=0,\ i\neq j,\ i,j=1,\ldots, n$$
We start from $\mathbf{y}=(y_1,y_2,\ldots,y_n)$, a simple random sample, with $x_1,x_2,\ldots, x_n$ not random. $\beta_0$ and $\beta_1$ are the classical parameters of the model (estimated identically by maximum likelihood and OLS). Consequently
$$y_i \sim \mathrm{N}(\beta_0+\beta_1 x_i;\sigma^2),\ i=1,\ldots, n.$$
Density function for $y_i$
$$f(y_i)= \frac{1}{\sigma \sqrt{2\pi}} \exp\left\{ -\frac{1}{2\sigma^2}\left[y_i-(\beta_0+\beta_1 x_i)\right]^2 \right\},\ -\infty <y_i<\infty $$
Likelihood function
\begin{align*}
\hbox{L}(\mathbf{y}) &= \prod_{i=1}^{n} \frac{1}{\sigma \sqrt{2\pi}} \exp\left\{ -\frac{1}{2\sigma^2}\left[(y_i-\overline{y})-(\beta_0+\beta_1 x_i-\overline{y})\right]^2 \right\} \\
&= (2\pi\sigma^2)^{-n/2}\\
&\qquad\times \exp\left\{ -\frac{1}{2\sigma^2}\left[\underbrace{\sum_{i=1}^{n}(y_i-\overline{y})^2-2\sum_{i=1}^{n}(y_i-\overline{y})(\beta_0+\beta_1 x_i-\overline{y})+\sum_{i=1}^{n}(\beta_0+\beta_1 x_i-\overline{y})^2}_{\delta}\right] \right\}
\end{align*}
Let $s_{yy}=\sum_{i=1}^{n}(y_i-\overline{y})^2=\sum_{i=1}^{n}y_i^2-n\overline{y}^2$ and $s_{xx}=\sum_{i=1}^{n}x_i^2-n\overline{x}^2$, with $y$ a random variable and $x$ constant.
\begin{align*}
\delta&=\sum_{i=1}^{n}\left[(y_i-\overline{y})-(\beta_0+\beta_1 x_i-\overline{y})\right]^2=\sum_{i=1}^{n}\left(y_i-\beta_0-\beta_1 x_i\right)^2\\
&=\sum_{i=1}^{n}\left[(y_i-\hat{\beta}_0-\hat{\beta}_1 x_i)+(\hat{\beta}_0-\beta_0)+(\hat{\beta}_1-\beta_1)x_i\right]^2\\
&=\sum_{i=1}^{n}(y_i-\hat{\beta}_0-\hat{\beta}_1 x_i)^2+n(\hat{\beta}_0-\beta_0)^2+2n\overline{x}(\hat{\beta}_0-\beta_0)(\hat{\beta}_1-\beta_1)+(\hat{\beta}_1-\beta_1)^2\sum_{i=1}^{n}x_i^2\\
&=\underbrace{\sum_{i=1}^{n}(y_i-\hat{\beta}_0-\hat{\beta}_1 x_i)^2}_{g(\mathbf{y})}+\underbrace{n(\hat{\beta}_0-\beta_0)^2+2n\overline{x}(\hat{\beta}_0-\beta_0)(\hat{\beta}_1-\beta_1)}_{h(\hat{\beta}_0,\hat{\beta}_1;\,\beta_0,\beta_1)}+\underbrace{(\hat{\beta}_1-\beta_1)^2\sum_{i=1}^{n}x_i^2}_{s(\hat{\beta}_1;\,\beta_1)}
\end{align*}
The cross terms involving the residuals $e_i=y_i-\hat{\beta}_0-\hat{\beta}_1 x_i$ vanish because the normal equations give $\sum_{i=1}^{n}e_i=0$ and $\sum_{i=1}^{n}e_i x_i=0$.
Since $\hat{\beta}_1=\frac{\sum_{i=1}^{n}x_iy_i-n\overline{x}\ \overline{y}}{s_{xx}}$ and $\hat{\beta}_0=\overline{y}-\hat{\beta}_1\overline{x}$ are both the OLS and the ML estimators,
\begin{eqnarray*}
\hbox{L}(\mathbf{y}) &=&(2\pi\sigma^2)^{-n/2} \exp\left[-\frac{g(\mathbf{y})}{2\sigma^2}\right]\times \exp\left[-\frac{h(\hat{\beta}_0,\hat{\beta}_1;\,\beta_0,\beta_1)+s(\hat{\beta}_1;\,\beta_1)}{2\sigma^2}\right]
\end{eqnarray*}
Finally by the factorisation theorem we prove that for a simple linear regression model:
$$y_i=\beta_0+\beta_1x_i+\varepsilon_i,$$
the conditional distribution
$$Y|\hat{\beta}_0,\hat{\beta}_1$$
does not depend on $\beta_0$ and $\beta_1$, the original parameters for $Y$.
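As a numerical sanity check on the closed forms $\hat{\beta}_1=\left(\sum_{i} x_iy_i-n\overline{x}\,\overline{y}\right)/s_{xx}$ and $\hat{\beta}_0=\overline{y}-\hat{\beta}_1\overline{x}$ used above, a short Python sketch (invented data) verifies the normal equations $\sum_i e_i = 0$ and $\sum_i e_i x_i = 0$ that make the cross terms in the factorisation vanish:

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# Closed-form OLS/ML estimators for the simple linear model
sxx = sum(xi * xi for xi in x) - n * xbar ** 2
b1 = (sum(xi * yi for xi, yi in zip(x, y)) - n * xbar * ybar) / sxx
b0 = ybar - b1 * xbar

residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
# Normal equations: sum e_i = 0 and sum e_i x_i = 0 (up to rounding)
print(abs(sum(residuals)), abs(sum(e * xi for e, xi in zip(residuals, x))))
```

Both printed quantities are zero up to floating-point error, confirming that the residual sum of squares separates cleanly from the parameter-dependent terms.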
39,175 | How can change in cost function be positive? | If the learning rate is too large, you can "overshoot". Imagine you're using gradient descent to minimize a 1-dimensional, convex parabola. If you take a small step, you'll (probably) end up closer to the minimum than you were before. But if you take a large step, it's possible that you'll end up on the opposite side of the parabola, possibly even farther away from the minimum than you were before!
Here's a simple demonstration: $f(x)=x^2$ achieves a minimum at $x=0$; $f^\prime(x)=2x$, so our gradient update has the form
$$
\begin{align}
x^{(t+1)} &= x^{(t)} - \eta ~ f^\prime \left( x^{(t)} \right)\\
&= x^{(t)} - 2 \eta x^{(t)}\\
&= x^{(t)}(1 - 2 \eta)
\end{align}
$$
If we start at $x^{(0)}=-1$, we can plot the progress of the optimizer and for $\eta = 0.1$, it's not hard to see that we are slowly but surely approaching the minimum.
If we start from $x^{(0)}=-1$ but choose $\eta = 1.125$ instead, then the optimizer diverges. Instead of getting closer to the minimum at each iteration, the optimizer will always overshoot; obviously, the change in the objective function is positive at each step.
Why does it overshoot? Because the step size $\eta$ is so large that the linear approximation to the loss is not a good approximation. That's what Nielsen means when he writes
To make gradient descent work correctly, we need to choose the learning rate $\eta$ to be small enough that Equation (9) is a good approximation.
Stated another way, if $\Delta C > 0$, then Equation (9) is not a good approximation; you'll need to select a smaller value for $\eta$.
For the starting point $x^{(0)}=-1$, the dividing line between these two regimes is $\eta=1.0$; at this value of $\eta$, the optimizer alternates between $-1$ for even iterations and $1$ for odd iterations. For $\eta < 1$, gradient descent converges from this starting point; for $\eta > 1$, it diverges.
Some information about how to choose good learning rates for quadratic functions can be found in my answer to Why are second-order derivatives useful in convex optimization?
f <- function(x) x^2
grad_x <- function(x) 2*x
descent <- function(x0, N, gradient, eta=0.1){
x_traj <- numeric(N)
x_traj[1] <- x0
for(i in 2:N){
nabla_x_i <- gradient(x_traj[i - 1])  # use the supplied gradient function
x_traj[i] <- x_traj[i - 1] - eta * nabla_x_i
}
return(x_traj)
}
x <- seq(-2,2,length.out=1000)
x_traj_eta_01 <- descent(x0=-1.0, N=10, gradient=grad_x, eta=0.1)
png("gd_eta_0.1.png")
plot(x,f(x), type="l", sub=expression(paste(eta, "=0.1")), main="Gradient descent for f(x)=x * x")
lines(x_traj_eta_01, f(x_traj_eta_01), type="o", col="red", lwd=2)
dev.off()
png("gd_eta_1.125.png")
x_traj_eta_1125 <- descent(x0=-1.0, N=20, gradient=grad_x, eta=1.125)
plot(x,f(x), type="l", sub=expression(paste(eta, "=1.125")), main="Gradient descent for f(x)=x * x")
lines(x_traj_eta_1125, f(x_traj_eta_1125), type="o", col="red", lwd=2)
dev.off()
39,176 | Statistics without hypothesis testing | Let me take the liberty to rephrase the question as "What are the arguments that Andrew Gelman puts forward against hypothesis testing?"
In the paper that is linked in the post, the authors take issue with using a mechanical procedure for model selection, or, as they phrase it:
[Raftery] promises the impossible: The selection of a model that is adequate for specific purposes without consideration of those purposes.
Frequentist or Bayesian hypothesis testing are two examples of such mechanical procedures. The specific method that they criticize is model selection by BIC, which is related to Bayesian hypothesis testing. They list two main cases when such procedures can fail badly:
"Too many data": Say you have a regression model $y_i = \beta'x_i + \epsilon_i$ with, say, 100 standard normally distributed regressors. Say that the first entry of $\beta$ is $1$ and all other entries are equal to $10^{-10}$. Given enough data, a hypothesis test would yield that all estimates of $\beta$ are "significant". Does this mean that we should include $x_2,x_3,\ldots x_{100}$ in the model? If we were interested in discovering some relationships between feature and outcome, would we not be better off considering a model with only $x_1$?
"Not enough data": On the other extreme, if sample sizes are very small, we will be unlikely to find any "significant" relationships. Does this mean that the best model to use is the one that includes no regressors?
There are no general answers to these questions as they depend on the modeler's objective in a given situation. Often, we can try to select models based on criteria that are more closely related to our objective function, e.g. a cross-validation estimate when our objective is prediction. In many situations, however, data-based procedures need to be complemented by expert judgment (or by using the Bayesian approach with carefully chosen priors that Gelman seems to prefer).
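The "too many data" point can be illustrated with a small simulation (a Python sketch with made-up numbers: a slope of 0.05 rather than $10^{-10}$, so the effect shows up at feasible sample sizes; the $10^{-10}$ case behaves the same way at astronomically larger $n$):

```python
import math
import random

def slope_t_stat(n, beta, seed=1):
    """t statistic for H0: slope = 0 in an OLS fit of y = beta*x + N(0,1) noise."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    ys = [beta * x + rng.gauss(0.0, 1.0) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    sse = sum((y - my - b * (x - mx)) ** 2 for x, y in zip(xs, ys))
    return b / math.sqrt((sse / (n - 2)) / sxx)

# The same tiny slope: unremarkable at n = 200, "highly significant" at n = 100,000,
# since the t statistic grows roughly like beta * sqrt(n).
print(slope_t_stat(200, 0.05), slope_t_stat(100_000, 0.05))
```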
39,177 | Statistics without hypothesis testing | The Neyman-Pearson decision-theoretic approach to hypothesis testing (reject/accept) is closely aligned with Popper's Falsification. This method is not invalid, it just has not accommodated the growing human greed for consumption of knowledge, products, and professional gain.
The validity of Popper's approach to science is strongly based on: 1. prespecifying hypotheses, 2. only conducting research with adequate power, and 3. consuming the results of positive/negative studies with equal earnestness. We have (in academia, business, government, media, etc.) over the past century done none of that.
Fisher proposed a way of doing "stats without hypothesis tests". He never suggested that his p-value be compared to a 0.05 cut-off. He said to report the p-value, and report the power of the study.
Another alternative suggested by many is to merely report the confidence intervals (CIs). The thought is that forcing one to evaluate a trial's results based on a physical quantity, rather than a unitless quantity (like a p-value), would encourage them to consider more subtle aspects like effect size, interpretability, and generalizability. However, even this has fallen flat: the growing tendency is to inspect whether the CI crosses 0 (or 1 for ratio scales) and declare the result statistically significant if not. Tim Lash calls this backdoor hypothesis testing.
There are meandering and endless arguments about a new era of hypothesis testing. None has addressed the greed I spoke of earlier. I am of the impression we don't need to change how we do statistics; we need to change how we do science.
39,178 | How does one graph the PDF of a variable having a mixed discrete-continuous distribution? | The two are not on the same "scale" (probability is p(x) for the discrete and f(x)dx for the continuous, so p and f are very different things); strictly speaking the way to draw the distribution for a mixed variable would be to draw the cdf.
You could also draw the discrete and continuous parts separately, as you suggest.
I don't think there are any standard names for such a drawing.
Some people draw the two parts on the same plot, but the meaning of the function values is quite different and you get behaviour that people don't usually expect (though it's not at all surprising when you consider it) when you try to deal with discrete and continuous parts together.
Consider, for example, doing a histogram where you take more bins as you get more data -- then the apparent shape of the histogram changes with sample size. Since judging shape is what people use histograms for, it somewhat defeats the purpose. One of the things you lose by trying to draw them on the same plot is having the histogram "converge" to something you'd like to see (the finite continuous parts disappear down to zero).
Three histograms of a large sample from a 0-1 inflated beta, with different numbers of bins.
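The same effect in a minimal numeric sketch (Python; the point-mass weight 0.3 and the Beta(2, 2) continuous part are illustrative stand-ins for a 0-1 inflated beta): the density-scaled height of the bin containing the spike grows like $0.3/\text{width}$ as the bins shrink, while bins over purely continuous regions stabilize.

```python
import random

# Hypothetical mixture: probability 0.3 of exactly 0 (point mass), otherwise Beta(2, 2).
rng = random.Random(42)
n = 20_000
sample = [0.0 if rng.random() < 0.3 else rng.betavariate(2, 2) for _ in range(n)]

def first_bin_density(data, width):
    """Density-scaled histogram height of the bin [0, width)."""
    count = sum(1 for v in data if 0.0 <= v < width)
    return count / (len(data) * width)

# The bin holding the spike blows up as the bins narrow, so the histogram's
# apparent shape depends on the number of bins chosen.
print(first_bin_density(sample, 0.10), first_bin_density(sample, 0.01))
```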
None of those plots look much like what you get if you draw the density of the continuous part and then try to mark in the probabilities using the scale on the y-axis (which again is not really appropriate in any case).
While I generally advise against trying to draw both on the same plot, if you do such a thing, you really have to explain very carefully what's going on so people interpret the drawing correctly.
I broke my usual rule when drawing the last plot in this answer:
A model for non-negative data with many zeros: pros and cons of Tweedie GLM
but I did at least explain the problem there. Note that the probability spike at 0 is roughly the same height in each sub-plot even though it looks huge in some and tiny in others - while sometimes it's convenient to break the rule of not putting both on the same plot, one must consider carefully the degree to which you mislead people by doing so. [You often see this done in some fashion with the Tweedie (I've seen it in at least four papers). One example is Figure 1 of Dunn & Smyth (2001)
"Tweedie Family Densities: Methods of Evaluation",
Proceedings of the 16th International Workshop on Statistical Modelling, Odense, Denmark, 2–6 July. (pdf preprint). It's not such a problem if everyone is clear what they're looking at]
39,179 | How does one graph the PDF of a variable having a mixed discrete-continuous distribution? | Following on the back and forth with @Glen_b in the commentary to his answer, I created a "mixed" histogram of a mixed discrete-continuous distribution. The variable mixednorm has a 20% chance of producing data from a normal distribution with a mean of 2 and a standard deviation of .8, and an 80% chance of producing a Bernoulli value with $p=.75$.
There are separate scales for the discrete and continuous components, and the discrete value with the highest probability is scaled to equal the peak of the continuous histogram. Also: the width of the discrete value bins is set to be the same as the width of the continuous bins.
I would appreciate comments and suggestions.
A second approach uses vertical lines rather than bars for the discrete values, which might be especially useful when the discrete values are quite proximate:
39,180 | How is bagging different from cross-validation? | To add to @juampa's answer,
The big difference between bagging and validation techniques is that bagging averages models (or predictions of an ensemble of models) in order to reduce the variance the prediction is subject to while resampling validation such as cross validation and out-of-bootstrap validation evaluate a number of surrogate models assuming that they are equivalent (i.e. a good surrogate) for the actual model in question which is trained on the whole data set.
Bagging uses bootstrapped subsets (i.e. drawing with replacement of the original data set) of training data to generate such an ensemble but you can also use ensembles that are produced by drawing without replacement, i.e. cross validation: Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). DOI: 10.1007/s00216-007-1818-6
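The structural difference between the two resampling schemes can be sketched in a few lines (an illustrative Python sketch, not any particular library's API):

```python
import random

def bootstrap_indices(n, rng):
    """Draw n training indices WITH replacement (a bagging-style resample)."""
    return [rng.randrange(n) for _ in range(n)]

def kfold_test_indices(n, k, rng):
    """Split indices into k disjoint test folds (cross-validation-style)."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

rng = random.Random(0)
boot = bootstrap_indices(100, rng)
folds = kfold_test_indices(100, 5, rng)

# A bootstrap resample almost surely repeats some cases (about 63% unique on
# average), while the CV test folds are disjoint and together cover every case once.
print(len(set(boot)), sorted(i for f in folds for i in f) == list(range(100)))
```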
Whether an ensemble model can be better than single models depends entirely on what the dominant "problem" of the single model is. If it is variance (overfitting, random error, unstable predictions), then ensemble prediction can help. If the problem is bias (systematic error, underfitting, stable but wrong predictions), pretty much all models of the ensemble will give the same prediction and the ensemble prediction is just as wrong.
Both out-of-bootstrap and iterated/repeated cross validation allow to measure the stability of predictions by comparing predictions for the same input (test) data by a number of different surrogate models. These surrogate models differ in that they were trained on slightly different data sets, that can be described as exchanging a few training cases between any two of the surrogate models for other training cases.
As for the assumption of independence @juampa mentions, there are different topics behind this:
All these techniques typically assume that the splitting of the data achieves independence between the cases. I.e. if there are repeated measurements of the same case in the data, they are all in or all out of a particular training or test set.
This is a crucial assumption that allows us to assume the observed performance generalizes to unknown cases of the same population the data comes from. It is typically up to you to ensure the splitting is done in a way that achieves this independence of cases: without intimate knowledge about the data it is not possible to make sure of such an independent splitting.
Such a splitting into independent cases is needed both by resampling validation and ensemble models if they are to be useful for predicting independent cases.
There is another independence assumption that is sometimes but not always made for validation results (or rather, during their interpretation):
For some tasks such as the general comparison of algorithms, the surrogate models in a resampling validation are sometimes treated as if they were independent trials of the algorithm. Which is quite obviously not the case, as the models share most of their training data (with the exception of a single 2-fold split).
This assumption is needed in order to generalize the findings on this data set to a data set of the given characteristics.
On the other hand, if the task is to establish the performance of the model obtained from the data set at hand (which is e.g. actually to be used for prediction), then the surrogate models are assumed to be equivalent, if not perfectly to the model in question (the well-known slight pessimistic bias of resampling validation) then at least among themselves. This point of view means that the training data of the different surrogate models are not assumed to be independent but quite the opposite: they are assumed to be almost equal (which is what the resampling process actually produces).
39,181 | How is bagging different from cross-validation? | Bagging is useful when you work with classifiers that can achieve high accuracy because they tend to overfit (for example decision trees and neural networks) but, because of that, show high variance, especially in the small-data domain. Then you generate bootstrap samples of your data, train classifiers on each of them, and then use the aggregate to classify so that you can average out errors due to overfitting.
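The variance-reduction idea behind this can be caricatured with independent noise (a Python sketch; real bootstrap replicates are positively correlated, so the practical gain is smaller than this idealized case):

```python
import random
import statistics

rng = random.Random(7)

def noisy_prediction():
    """Stand-in for one high-variance model's prediction at a fixed test point."""
    return 1.0 + rng.gauss(0.0, 1.0)  # truth = 1.0 plus model noise

# Spread of a single model's prediction vs. an average ("bag") of 25 models.
single = [noisy_prediction() for _ in range(2000)]
bagged = [statistics.fmean(noisy_prediction() for _ in range(25)) for _ in range(2000)]

s_single = statistics.stdev(single)
s_bagged = statistics.stdev(bagged)
print(s_single, s_bagged)  # for independent noise the bagged spread shrinks by roughly 1/sqrt(25)
```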
Cross validation is about evaluating how well your model does, but you have limited data. Again, you have the problem that your estimate might not be robust enough.
Both share the problem that, when used, one makes the assumption that the estimates for each iteration (the classifier that results from each bootstrap iteration and the error of the classifier resulting from each cross-validation step, resp.) are statistically independent, which is not true. See for example A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection and Bagging Predictors for further details.
So yes, they might be helpful in your case.
39,182 | What are the real hyperparameters of a neural network? | Yes. Essentially, any parameter that you can initialize (before training the neural network model) can be seen as a hyperparameter.
This includes the optimizer's hyperparameters (e.g., for SGD, Adam, etc.): learning rate, decay rates, step size, and batch size; as well as the model's hyperparameters: number of layers, number of units at each layer, dropout rate at each layer, L2 (or L1) regularization parameters, and activation function type (ReLU, Sigmoid, Tanh). If you are dealing with CNNs, there are extra hyperparameters related to the convolutional layers: window size, stride value, and pooling layers.
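In practice these knobs are tuned by searching over a configuration space; here is a toy grid-search sketch (Python; the search space and the loss function are entirely made up to show the mechanics):

```python
import itertools

# Hypothetical search space and a made-up validation loss, purely to illustrate
# the mechanics of grid search over hyperparameters.
space = {
    "learning_rate": [0.001, 0.01, 0.1],
    "num_layers": [2, 4],
    "dropout": [0.0, 0.5],
}

def fake_validation_loss(cfg):
    # Toy objective: pretend lr=0.01, 4 layers, dropout=0.5 is optimal.
    return (abs(cfg["learning_rate"] - 0.01)
            + 0.1 * abs(cfg["num_layers"] - 4)
            + 0.2 * abs(cfg["dropout"] - 0.5))

configs = [dict(zip(space, values)) for values in itertools.product(*space.values())]
best = min(configs, key=fake_validation_loss)
print(best)  # -> {'learning_rate': 0.01, 'num_layers': 4, 'dropout': 0.5}
```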
There are even more hyperparameters that you can initialize and tune. For example, take a look at this list.
39,183 | What are the real hyperparameters of a neural network? | Hyperparameters for a deep neural network:
- Number of iterations
- Number of layers LL in the neural network
- Number of hidden units in each layer
- Learning rate α
- Step size
- Choice of the activation function
- Loss function
- Mini-batch Size
- Momentum
- Regularization
- Drop out rate
- Weight Decay
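Most of the hyperparameters above are tuned rather than fixed by hand. Below is a minimal sketch of tuning a handful of them by random search; the search space, ranges, and the toy objective are illustrative assumptions, not anything from the original answer:

```python
import random

# Hypothetical search space over a few of the hyperparameters listed above.
SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),  # log-uniform draw
    "num_layers":    lambda: random.randint(1, 6),
    "hidden_units":  lambda: random.choice([32, 64, 128, 256]),
    "dropout_rate":  lambda: random.uniform(0.0, 0.5),
    "batch_size":    lambda: random.choice([16, 32, 64, 128]),
}

def sample_config():
    """Draw one random hyperparameter configuration."""
    return {name: draw() for name, draw in SPACE.items()}

def random_search(objective, n_trials=20, seed=0):
    """Return the best configuration found over n_trials random draws."""
    random.seed(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config()
        score = objective(cfg)  # e.g. validation accuracy of the trained net
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in objective: prefers moderate learning rates and some dropout.
toy = lambda c: -abs(c["learning_rate"] - 0.01) - abs(c["dropout_rate"] - 0.2)
best, score = random_search(toy)
```

In practice the objective would be cross-validated performance of the trained network, and the continuous ranges (especially the learning rate) are usually searched on a log scale as above.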
39,184 | PDF of cosine of a uniform random variable | First note that $\cos$ is an even function; $\cos(-X)=\cos(X)$. Consequently it's the same as taking $\cos(W)$ where $W=|X|$ (or indeed you could work instead with $\cos(-W)$). Now $W$ is uniform on $[0,\pi)$. This is easier because the $\cos$ function is now monotonic over the values taken by the new variable and is now invertible.
Let $Y=\cos(W)$. Note that $P(W\leq w) = w/\pi$
\begin{eqnarray}
F_Y(y)&=&P(Y\leq y)\\
&=&P(\cos(W)\leq y) \\
&=& P(W\geq \cos^{-1}(y)) \\
&=& P(W\leq \cos^{-1}(-y)) \\
&=&\cos^{-1}(-y)/\pi \\[20pt]
f_Y(y)&=&\frac{d}{dy}F_Y(y)\\
&=&\frac{1}{\pi}\frac{d}{dy}\cos^{-1}(-y)
\end{eqnarray}
Now $\frac{d}{dx} g^{-1}(x) = \frac{1}{g'(g^{-1}(x))}$, so
\begin{eqnarray}
\frac{1}{\pi}\frac{d}{dy}\cos^{-1}(-y)&=&
-\frac{1}{\pi}\frac{d}{dy}\cos^{-1}(y)\\
&=&
\frac{1}{\pi}\cdot\frac{1}{\sin(\cos^{-1}(y))}\,,\:-1<y<1
\end{eqnarray}
Or, using the fact that $\frac{d}{dx}\cos^{-1}(x)=-\frac{1}{\sqrt{1-x^2}}$,
\begin{eqnarray}
\frac{1}{\pi}\frac{d}{dy}\cos^{-1}(-y)&=&\frac{1}{\pi}\cdot \frac{1}{\sqrt{1-y^2}}\,,\:-1<y<1
\end{eqnarray}
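The derived arcsine density $f_Y(y)=\frac{1}{\pi\sqrt{1-y^2}}$ is easy to check by simulation. A quick sketch, assuming $X\sim\text{Uniform}(-\pi,\pi)$ as in the derivation above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=1_000_000)  # X ~ Uniform(-pi, pi)
y = np.cos(x)

# Empirical CDF of Y vs the derived F_Y(y) = arccos(-y)/pi
for t in (-0.5, 0.0, 0.5):
    assert abs(np.mean(y <= t) - np.arccos(-t) / np.pi) < 0.005

# Density at y = 0: P(|Y| < h)/(2h) should approach f_Y(0) = 1/pi
h = 0.05
assert abs(np.mean(np.abs(y) < h) / (2 * h) - 1 / np.pi) < 0.01
```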
39,185 | What are the motivations for the use of the logistic function as a model for binary classification? | There are several reasons for choosing the logistic function as a "default" method for estimating probabilities from one or more variables. Here are a few:
- Historical, e.g. dose-response curves
- When used with a regression specification on the right hand side of a model, the regression effects are interpretable in that they can be related to odds ratios for the separate effects of predictors
- If you start with a multivariate normality assumption for the predictors, as in linear discriminant analysis, using Bayes' rule to reverse the conditioning yields the logistic model
- The shape fits actual data a good deal of the time
Please note that the logistic is not used for classification but for direct probability estimation.
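The discriminant-analysis point can be verified numerically: with two equal-variance Gaussian class-conditional densities, Bayes' rule gives a posterior whose log-odds are exactly linear in $x$, i.e. a logistic function. The means, common standard deviation, and prior below are arbitrary illustrative choices:

```python
import numpy as np

mu0, mu1, sigma, p1 = -1.0, 1.5, 1.2, 0.4  # class means, common sd, prior P(class 1)

def gauss(x, mu):
    # unnormalized Gaussian density; the constant cancels in the posterior ratio
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def posterior(x):
    """P(class 1 | x) via Bayes' rule on the two class-conditional densities."""
    num = p1 * gauss(x, mu1)
    return num / (num + (1 - p1) * gauss(x, mu0))

def logistic_form(x):
    # Expanding the log-odds shows they are linear in x:
    # log odds = log(p1/(1-p1)) + (mu0^2 - mu1^2)/(2 sigma^2) + (mu1 - mu0)/sigma^2 * x
    b1 = (mu1 - mu0) / sigma ** 2
    b0 = np.log(p1 / (1 - p1)) + (mu0 ** 2 - mu1 ** 2) / (2 * sigma ** 2)
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

xs = np.linspace(-5, 5, 201)
assert np.allclose(posterior(xs), logistic_form(xs))
```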
39,186 | What are the motivations for the use of the logistic function as a model for binary classification? | A technical feature of the logistic link function is that when combined with ML estimation (or posterior mode with flat prior) the gradient equation becomes
$$\sum_i x_i (p_i - y_i) = 0 $$
That is, your residuals, $p_i - y_i$, are uncorrelated with the covariates, $x_i$ (note: $y_i$ is binary, $0$ or $1$). This is analogous to OLS regression. If you use a different link function, the equation is modified by the inclusion of "weights" that depend on how much that link function differs from the logistic one.
One practical feature of this is that if you include an intercept in your model, the fitted probabilities will add up to the number of "successes" (observations where $y_i=1$). Similarly for factor variables.
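Both properties can be checked numerically. A sketch that fits a logistic regression by Newton-Raphson on simulated data (the data-generating values are arbitrary) and verifies the score equation $\sum_i x_i (p_i - y_i) = 0$ at the ML estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
true_beta = np.array([-0.5, 1.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

# Newton-Raphson (IRLS) for the logistic maximum-likelihood estimate
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

p = 1 / (1 + np.exp(-X @ beta))
# Score equation at the optimum: residuals are orthogonal to every column of X ...
assert np.allclose(X.T @ (p - y), 0, atol=1e-6)
# ... and with an intercept, fitted probabilities sum to the number of successes
assert abs(p.sum() - y.sum()) < 1e-6
```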
39,187 | How can I quickly detect cheating variables in large data? | This is sometimes referred to as "Data Leakage." There's a nice paper on this here:
Leakage in Data Mining:
Formulation, Detection, and Avoidance
The above paper has plenty of amusing (and horrifying) examples of data leakage, for example, a cancer prediction competition where it turned out that patient ID numbers had a near perfect prediction of future cancer, unintentionally because of how groups were formed throughout the study.
I don't think there's a clear cut way of identifying data leakage. The above paper has some suggestions, but in general it's very problem specific. As an example, you could definitely look at just the correlations between your features and target. However, sometimes you'll miss things. For example, imagine you're making a spam bot detector for a website like stackexchange, where in addition to collecting features like message length, content, etc., you can potentially collect information on whether a message was flagged by another user. However, if you want your bot detector to be as fast as possible, you shouldn't have to rely on user-generated message flags. Naturally, spam bots would accumulate a ton of user-generated message flags, so your classifier might start relying on these flags, and less so on the content of the messages. For this reason, you should consider removing flags as a feature, so that you can tag bots faster than the crowd-sourced user effort, i.e. before a wide audience has been exposed to their messages.
Other times, you'll have a very stupid feature that's causing your detection. There's a nice anecdote here about how the Army tried to make a tank detector which had near perfect accuracy but ended up detecting cloudy days instead, because all the training images with tanks were taken on a cloudy day, and every training image without tanks was taken on a clear day. A very relevant paper on this is: "Why Should I Trust You?": Explaining the Predictions of Any Classifier - Ribeiro et al.
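Complementing the correlation idea above, a cheap first-pass screen is to rank every feature by its single-feature discriminative power against the target (here a rank-based AUC) and flag anything suspiciously close to perfect. The threshold, feature names, and data below are invented for illustration; this is a heuristic, not a substitute for thinking about feature provenance:

```python
import numpy as np

def leakage_screen(X, y, names, threshold=0.95):
    """Flag features whose single-feature AUC against the binary target y is
    suspiciously high -- a cheap first pass for spotting 'cheating' features."""
    flagged = []
    pos, neg = y == 1, y == 0
    for j, name in enumerate(names):
        x = X[:, j]
        # Mann-Whitney form: AUC = (rank sum of positives - n_pos(n_pos+1)/2)/(n_pos*n_neg)
        ranks = x.argsort().argsort() + 1
        auc = (ranks[pos].sum() - pos.sum() * (pos.sum() + 1) / 2) / (pos.sum() * neg.sum())
        auc = max(auc, 1 - auc)  # direction doesn't matter for a leakage screen
        if auc >= threshold:
            flagged.append((name, auc))
    return flagged

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
noise = rng.normal(size=(1000, 3))
leak = y + rng.normal(scale=0.05, size=1000)  # near-deterministic copy of the label
X = np.column_stack([noise, leak])
print(leakage_screen(X, y, ["a", "b", "c", "user_flag"]))
# the leaky column is the only one flagged
```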
39,188 | How can I quickly detect cheating variables in large data? | One way of detecting cheating variables is building a tree model and looking at the first few splits. Here is a simulated example.
# The cheating variable directly drives the response probability; the 100
# columns of x are pure noise, so the cheat dominates the tree's first split.
cheating_variable <- runif(1e3)
x <- matrix(runif(1e5), nrow = 1e3)
y <- rbinom(1e3, 1, cheating_variable)
d <- data.frame(x = cbind(x, cheating_variable), y = y)
library(rpart)
library(partykit)
tree_fit <- rpart(y ~ ., d)
plot(as.party(tree_fit))
39,189 | Why doesn't homoskedasticity bias an estimator? | What heteroskedasticity describes is that the variation of the errors may depend on the values of the regressors. That is, for certain values of $x$, while we still expect zero errors on average, any given error tends to be further away from the true regression line in either direction.
The situation you describe rather concerns the situation in which the errors systematically deviate from the regression line in one direction (or in one direction for some range of $x$, and in another for another range of $x$), so that errors would no longer have mean zero for such predictor values, for example due to omitted nonlinearities or omitted variables.
Here is an example in which the error term $u$ of the model is generated such that it correlates with the regressor $X$ (see code below). This causes the scatter plot not to scatter around the true (red) regression line, such that, despite the huge sample size of $n=10,000$, the (blue) estimated OLS line is pretty far away from the true value $\beta_1=0.5$.
library(mvtnorm)
# truth
beta0 <- 1
beta1 <- 0.5
# generate some data with correlation between X and u
n <- 10000
errors <- rmvnorm(n, mean = rep(0, 2), sigma = matrix(c(1,-0.5,-0.5,1),2,2))
u <- errors[,1]
X <- errors[,2]
y <- beta0 + beta1*X + u
plot(X,y,xlab="x",ylab="y")
abline(a = beta0, b = beta1, col="red", lwd=4) # the truth
regr <- lm(y~X)
abline(regr, col="blue", lwd=3)
39,190 | Why doesn't homoskedasticity bias an estimator? | Consider a heteroskedastic linear regression with model form:
$$\boldsymbol{Y} = \boldsymbol{x} \boldsymbol{\beta} + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon} \sim \text{N}(\boldsymbol{0}, \sigma^2 \text{diag}(\boldsymbol{\tau}) ),$$
where $\boldsymbol{\tau} = (\tau_1, ..., \tau_n) \in \mathbb{R}^n$ gives the underlying variance scales for the variables. (To simplify our analysis we will also assume that none of these weightings is zero, which means that without loss of generality, we can let $|\text{diag}(\boldsymbol{\tau})| = 1$.) The ordinary least squares (OLS) estimator is:
$$\hat{\boldsymbol{\beta}} = (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} (\boldsymbol{x^\text{T}} \boldsymbol{Y}).$$
Since $\mathbb{E}(\boldsymbol{\varepsilon}) = \boldsymbol{0}$, this has expected value:
$$\begin{equation} \begin{aligned}
\mathbb{E}( \hat{\boldsymbol{\beta}} ) = \mathbb{E}( (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} (\boldsymbol{x^\text{T}} \boldsymbol{Y}) ) &= (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} \boldsymbol{x^\text{T}} \mathbb{E}(\boldsymbol{Y})\\[6pt]
&= (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} \boldsymbol{x^\text{T}} \mathbb{E}(\boldsymbol{x} \boldsymbol{\beta} + \boldsymbol{\varepsilon} )\\[6pt]
&= \boldsymbol{\beta} + (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} \boldsymbol{x^\text{T}} \mathbb{E} (\boldsymbol{\varepsilon}) \\[6pt]
&= \boldsymbol{\beta}.
\end{aligned} \end{equation}$$
It also has variance:
$$\begin{equation} \begin{aligned}
\mathbb{V}( \hat{\boldsymbol{\beta}} ) = \mathbb{V}( (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} (\boldsymbol{x^\text{T}} \boldsymbol{Y}) ) &= (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} \boldsymbol{x^\text{T}} \mathbb{V}(\boldsymbol{Y}) \boldsymbol{x} (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} \\[6pt]
&= (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} \boldsymbol{x^\text{T}} \mathbb{V}(\boldsymbol{\varepsilon}) \boldsymbol{x} (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} \\[6pt]
&= \sigma^2 (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1} (\boldsymbol{x^\text{T}} \text{diag}(\boldsymbol{\tau}) \boldsymbol{x}) (\boldsymbol{x^\text{T}} \boldsymbol{x})^{-1}. \\[6pt]
\end{aligned} \end{equation}$$
As you can see, nothing in the expected value derivation depends on the variance of the error vector in the model. Heteroscedasticity affects the variance matrix of the OLS coefficient estimator, but it does not affect the expected value. Intuitively, this is because positive and negative variation in the (symmetric) distribution of the error terms "balance out" in the expected value.
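The result is easy to verify by Monte Carlo: simulate a regression whose error variance depends strongly on the regressor and average the OLS estimates over many replications. The sample sizes, coefficients, and variance function below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 2000
beta = np.array([1.0, 0.5])
x = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
tau = np.exp(2 * x[:, 1])  # error variance depends on the regressor: heteroskedastic

estimates = np.empty((reps, 2))
for r in range(reps):
    eps = rng.normal(0, np.sqrt(tau))  # heteroskedastic, but E[eps | x] = 0
    y = x @ beta + eps
    estimates[r] = np.linalg.lstsq(x, y, rcond=None)[0]

print(estimates.mean(axis=0))  # close to (1.0, 0.5): OLS is still unbiased
```

The spread of the 2000 estimates around their mean is what the sandwich-form variance above describes; the mean itself stays at the true $\boldsymbol{\beta}$.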
39,191 | Why doesn't homoskedasticity bias an estimator? | I would like to go a bit theoretical on this. One of the most important assumptions for the unbiasedness of estimators is:
\begin{equation}
E(u|x) = 0
\end{equation}
This assumption implies that, on average, there is no systematic deviation between a parameter's true value and its estimator. This can also be expressed as:
\begin{equation}
E(\hat{\mu}) = \mu
\end{equation}
where $\hat{\mu}$ is the estimator of parameter $\mu$. This is called the zero conditional mean assumption, and it is distinct from the homoskedasticity assumption, which is:
\begin{equation}
Var(u|x) = {\sigma}^2
\end{equation}
Wooldridge (2002) clearly distinguishes between the assumptions of unbiasedness and homoskedasticity. The first concerns the expected value of $u$, while the second concerns the variance of $u$. @Christopher Hanck illustrates a good example of heteroskedasticity: the variation of the errors is spread out unequally along the true $\beta$ line in every direction.
In your case, when you imagine a bunch of errors clustered on the top left, such data will violate the unbiasedness assumption, since $E(u|x) > 0$ there. Yet it does not necessarily mean that $Var(u|x)$ is non-constant.
39,192 | Statistical Significance of multiple classifiers by using p-value | This is common practice but quite controversial among statisticians (Dietterich 1998; Kohavi 1995).
For a t-test you need 30 or more samples if you cannot assume a normal distribution (which you really cannot). The common approach to this is to have 3 repetitions of a 10 fold CV. The folds should be shuffled by a random generator, but the same random folds should be used with each algorithm. Use a one sample t-test on the difference of performance.
T-tests also assume i.i.d. samples. The independence of the samples is not given in CV. This problem can be avoided by using non-parametric tests instead. Or you could use corrected resampled t-tests that correct precisely for this non-independence in repeated cross-validation settings. They are less powerful than normal t-tests, but arguably more powerful than the non-parametric tests linked above. Weka uses these tests by default, for example.
PS. If you compare 7 algorithms pairwise, you also need a Bonferroni adjustment or something similar to your p-value.
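A sketch of the corrected resampled t-statistic mentioned above (the Nadeau-Bengio variance correction used by Weka); the per-fold difference numbers and fold sizes in the example are invented:

```python
import math
from statistics import mean, variance

def corrected_resampled_ttest(diffs, n_train, n_test):
    """Corrected resampled t-test (Nadeau & Bengio) on per-fold performance
    differences from repeated cross-validation.
    diffs: list of (score_A - score_B), one entry per fold x repetition."""
    k = len(diffs)
    d_bar = mean(diffs)
    s2 = variance(diffs)  # sample variance of the differences
    # The standard resampled t-test would divide by s2/k; the correction
    # inflates the variance term to account for overlapping training sets.
    t = d_bar / math.sqrt((1 / k + n_test / n_train) * s2)
    return t, k - 1  # statistic and degrees of freedom

# Example: 3 x 10-fold CV on ~675 cases -> roughly 607 train / 68 test per fold
diffs = [0.02, 0.01, 0.03, 0.00, 0.02, 0.01, 0.02, 0.03, 0.01, 0.02,
         0.01, 0.02, 0.00, 0.02, 0.03, 0.01, 0.02, 0.01, 0.02, 0.02,
         0.02, 0.01, 0.03, 0.02, 0.01, 0.02, 0.00, 0.02, 0.01, 0.02]
t, df = corrected_resampled_ttest(diffs, n_train=607, n_test=68)
```

Because the correction adds $n_{test}/n_{train}$ to the $1/k$ variance factor, the corrected statistic is always smaller than the naive resampled t-statistic, which is exactly the loss of power the answer describes.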
39,193 | Statistical Significance of multiple classifiers by using p-value | I agree with Ramalho that it is a good idea to use statistical hypothesis testing for the interpretation of model comparisons.
Here are some points, that should give you a start:
You can and should set up your performance measurements so that all algorithms use exactly the same splits. This gives you a paired test setup. Paired tests are more powerful than unpaired tests.
Yes, you should also do iterations/repetitions of the cross validation, because that allows you to compare the differences you observe between algorithms to model (in)stability.
However, keep in mind that your test sample size remains the number of different cases you have available; it does not increase by repeating the cross validation.
Also you should know that 0/1 loss and related figures of merit (such as accuracy) are quite badly behaved from a statistical point of view: they have high variance and thus need large sample sizes to become precise. So-called proper scoring rules would be a better choice, but you have to weigh in the decision whether your audience would accept that.
To give you a starting point: with 0/1 loss, a (single) paired comparison can be done with McNemar's test. As @user7019377 says, for multiple comparisons you need to do appropriate corrections.
Even better, though would be to limit the number of comparisons you actually do. So when setting up the test, think hard how to formulate your hypotheses. You may want to show that your stratified dummy is not significantly different from 50 % and the random dummy not significantly different from 1/nclasses (if I'm correctly guessing the situation).
Also, glancing at the results you got for the real classifiers, you may not need multiple tests there; a single test showing that no algorithm predicts significantly differently from any other may be sufficient.
Unfortunately, I don't know whether there's something like an ANOVA for binomial data. However, I'd go for a proper scoring rule and have a look whether ANOVA would be a suitable choice.
However, here's something that may provide you a shortcut that gives you all that you need for your interpretation:
The tests for the dummy classifiers are pretty straightforward textbook things.
For the real classifiers, a quick calculation of binomial confidence intervals reveals that 2-sided 95% confidence intervals of the accuracies you report for 675 test cases enclose all of the other classifiers (e.g. 610 of 675 => c.i. from about 87.9% to 92.4%)
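One way to reproduce an interval like the one quoted is the Wilson score interval, which yields bounds close to those above (a quick sketch; the function name is mine):

```python
import math

def wilson_ci(successes, n, z=1.96):
    # Wilson score interval for a binomial proportion at ~95% confidence.
    p = successes / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - half) / denom, (center + half) / denom

lo, hi = wilson_ci(610, 675)  # roughly (0.879, 0.924)
```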
looking at the "best" possible results with McNemar's test between KNN with 599 correct and SVM with 610 correct => that would be if the SVN gets all those cases correct that KNN got correct as well, plus additional 11 cases. This gets an (uncorrected) p-value of 0.0025. I.e. after whatever multiple testing correction you prefer, it won't be significant at the p = 0.05 level. Also, one single case that KNN got correct but SVM didn't catapults the p-value for McNemar's test between SVM and kNN above the p = 0.05 value.
Even if an ANOVA-analogue for McNemar's test found that not all 5 real classifiers are equivalent, you don't have enough test cases to conclude which is better!
=> For all practical purposes, I don't see any significant (also and particularly in the every-day sense of the word!) difference between the results for the 5 classifiers you used here. (This is based on the assumption that the results were obtained in a paired setup - if the experiment wasn't paired as described above, it is far from significant, but then things may look different for a paired experiment.)
Rank-based tests are also used instead of the binomial tests.
@air mentions the correlation between the cross validation surrogate models. Whether this is a problem or an advantage depends on what you actually want to compare. If you are looking from an application point of view and want to answer the question what performance to expect for the model trained on this particular dataset using the training algorithm - then the high correlation is what you want. Actually, you assume equality! The correlation is problematic if you want to treat your resampled data sets as approximations to drawing new training data sets for this application (i.e. want to compare achievable performance for different training algorithms on similar tasks).
In the Dietterich 1998 paper this corresponds to the question of whether you analyze a classifier or an algorithm.
39,194 | Statistical Significance of multiple classifiers by using p-value | You might consider adding the best naive classifier you can think of, assign classes based on some simple historical average, or assign randomly based on expected distribution. (Perhaps that's the 'random dummy'.)
Then do a chi-square test on the confusion matrix of each test vs. the naive classifier to determine the likelihood of the improvement being due to chance.
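A sketch of such a test on 2x2 correct/incorrect counts (Pearson chi-square with 1 df; the counts and the function name below are illustrative, not taken from the question):

```python
import math

def chi2_2x2(a, b, c, d):
    # Pearson chi-square statistic and p-value for the 2x2 table
    #           correct  incorrect
    #   model      a        b
    #   naive      c        d
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))  # chi-square(1 df) survival function
    return stat, p

# e.g. a model at 610/675 correct vs. a naive baseline at 500/675 correct
stat, p = chi2_2x2(610, 65, 500, 175)  # stat ~ 61.3, p far below 0.05
```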
39,195 | Statistical Significance of multiple classifiers by using p-value | Yes you can apply statistical hypothesis testing.
You evaluate each classifier with cross-validation (like you did) but instead store the (accuracy/other metric) of that classifier during each fold. After you have done that you can use hypothesis testing to test if the mean accuracy of that classifier is significantly better or worse than the mean accuracy of a second classifier. You can even rank them this way.
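That per-fold comparison can be sketched as a paired t statistic on the stored fold scores (the numbers below are illustrative; note that fold scores from the same data are correlated, so treat the result with care):

```python
import math

def paired_t(scores_a, scores_b):
    # Paired t statistic for per-fold metrics of two classifiers
    # evaluated on identical cross-validation splits.
    d = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# per-fold accuracies from the same 5 splits (made-up numbers)
t = paired_t([0.90, 0.80, 0.85, 0.95, 0.90],
             [0.80, 0.75, 0.75, 0.90, 0.80])  # t ~ 6.53 on 4 df
```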
Regarding this being a good or bad idea.. I find it is a good idea to compare the same classifier while using different sets of features or transformations.. but comparing different models? I don't know.
You might be better off combining the classifiers through an ensemble.
Repeating cross-validation is also a good idea if you pursue the method I described, especially if you want to achieve more precise/stable results (cross-validation usually has high variance).
Yes, I did it before... on python. It's really straightforward.
39,196 | When should I use `scipy.stats.wilcoxon` instead of `scipy.stats.ranksums`? [duplicate] | Frank Wilcoxon's 1945 paper [1] described two tests -- for "Unpaired Experiments" and "Paired Comparisons" which have come to be called the (Wilcoxon) rank sum test and the (Wilcoxon) signed rank test respectively.
So the first test is for independent (unpaired) samples and the second is for paired samples*.
* It can also be used for comparing single samples from a symmetric distribution versus some specified center of location.
The test for comparing unpaired samples was extended by Mann and Whitney in 1947. They organized it in a way that may at first seem like a different test, though the tests turn out to be equivalent. [A number of other authors suggested the same idea around the same time as Wilcoxon -- or even a bit earlier. Nevertheless the test is generally named for Wilcoxon or Mann and Whitney or both]
However, you seem to be familiar with the rank sum test so I will now focus on the signed rank test.
In the same way that the rank sum test corresponds (more or less) to an ordinary two-sample t-test, the signed rank test corresponds to a one-sample t-test on paired differences. In this case the differences are ranked in magnitude (i.e. without regard to sign) then the ranks that correspond to positive differences are summed.
This is compared with the distribution of the same statistic if the pair labels had been allocated to the pair-members arbitrarily (because if they come from the same population distribution, their pair differences should be symmetrically distributed about 0, and the sign that goes with each rank would then be equally likely to be + or - when the null is true).
Conversely when the populations are different in distribution in a way that tends to make one sample larger, the statistic should tend to be large or small (depending on which sample is from the population that tends to be larger).
[Note that if the null is false, it is not required that the differences be symmetrically distributed - many books incorrectly claim this is a requirement. However, if you are focused only on location shift alternatives then the differences should be symmetric about the amount of the location-shift.]
That is, large or small sums of positive ranks (relative to what you'd expect under the null) indicate a difference in the populations in a way that indicates one group tends to be higher than the other.
Wikipedia's version of the test adds together the positive and negative ranks (with the accompanying signs) instead. This shifts the center to 0 but gives an equivalent test.
The function you're calling defines the statistic differently to both of the above versions (as the smaller of the sum of positive and sum of negative ranks, which matches the original definition in Wilcoxon's paper) but the different versions of the tests are all equivalent and should give the same p-values under the same conditions.
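To make that last definition concrete, here is a plain-Python sketch of the original statistic (the smaller of the two rank sums); up to its handling of zeros and ties, this should match what `scipy.stats.wilcoxon` reports for the default two-sided test:

```python
def signed_rank_statistic(x, y):
    # Wilcoxon's original signed-rank statistic for paired samples:
    # rank the nonzero differences by absolute value (average ranks for
    # ties), then return the smaller of the positive-rank sum and the
    # negative-rank sum.
    d = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # 1-based average rank
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    w_minus = sum(r for r, di in zip(ranks, d) if di < 0)
    return min(w_plus, w_minus)

# differences 1, 2, 3, 4, -5: positive ranks sum to 10, negative to 5
signed_rank_statistic([1, 2, 3, 4, 5], [0, 0, 0, 0, 10])  # returns 5.0
```

The two rank sums always total $n(n+1)/2$, which is why reporting either sum, their signed total, or the minimum all give equivalent tests.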
[1] Wilcoxon, Frank (1945), "Individual comparisons by ranking methods", Biometrics Bulletin, 1(6), pp. 80–83.
(The Wikipedia page on this test offers a link to a pdf of the paper)
39,197 | Logistic regression diagnostic plots in R | This question is related to: Interpretation of plot(glm.model), which it may benefit you to read. Regarding your specific questions:
What constitutes a predicted value in logistic regression is a tricky subject. That's because the prediction can be made on several different scales. I think the most intuitive predicted value is the fitted probability of 'success' for the given observation. However, you could also use the fitted odds, or the fitted log odds. The fitted model equation / coefficients that is returned by statistical software will be on the scale of the linear predictor, that is, on the log odds scale. As a result, the fitted log odds of 'success' is typically used as the default. In R, for example, ?predict.glm will default to type="link" (the log odds); since your predicted values extend below $0$, it is clear that the log odds of success is what is being plotted.
Here are some additional resources that might help you:
Interpretation of simple predictions to odds ratios in logistic regression
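Moving a default prediction from the link scale back to the probability scale is just the inverse logit; a small Python illustration of the R behaviour described above (the helper name is mine):

```python
import math

def link_to_response(eta):
    # Inverse logit: maps a log-odds prediction (what R's predict.glm
    # returns by default, type="link") to a fitted probability
    # (type="response").
    return 1.0 / (1.0 + math.exp(-eta))

link_to_response(0.0)          # log odds 0 -> probability 0.5
link_to_response(math.log(3))  # odds of 3:1 -> probability 0.75
```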
Likewise, what constitutes a residual in logistic regression is even more tricky. There are lots of ways to compute residuals for a generalized linear model. In my opinion, the most intuitive residual would be the raw residual ($r_i = y_i - \hat y_i$), but they are actually hard to use, so you may well never see them. By default, ?residuals.glm defaults to type="deviance". Deviance residuals reflect a datum's contribution to the model's total deviance. Deviance residuals (and some other common types) are briefly discussed in the lecture notes for Germán Rodríguez's GLM class.
Suggested reading:
What do the residuals in a logistic regression mean?
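To make the deviance-residual idea concrete: for a Bernoulli GLM, each residual is the signed square root of that observation's contribution to the total deviance. A hand-rolled Python sketch of the formula (an illustration, not R's `residuals.glm`):

```python
import math

def deviance_residuals(y, p):
    # Deviance residuals for a Bernoulli GLM: sign(y - p) * sqrt(d_i),
    # where d_i = -2*[y*log(p) + (1-y)*log(1-p)] is observation i's
    # contribution to the model deviance.
    res = []
    for yi, pi in zip(y, p):
        d = -2.0 * (yi * math.log(pi) + (1 - yi) * math.log(1 - pi))
        res.append(math.copysign(math.sqrt(d), yi - pi))
    return res
```

The squared residuals sum to the model deviance, which is why this type is the usual default for GLM diagnostics.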
I have argued, in my answer to the thread linked at the top, that it is best not to use these to examine a fitted logistic regression model.
Further reading:
Diagnostics for logistic regression?
39,198 | Why are the cluster analysis results using raw data the same as the ones using PCA scores? | This is because PCA scores are simply original data in a rotated coordinate frame.
Below on the left I show some example 2D data (100 points in 2D) and on the right the corresponding PCA scores. The data cloud simply gets rotated clockwise by approximately 45 degrees.
If it is not completely clear to you how one gets from the first subplot to the second one or why PCA amounts to rotation, take a look at our very informative thread Making sense of principal component analysis, eigenvectors & eigenvalues. In my answer there I am using exactly the same toy dataset as displayed here. Some other answers are very much worth reading too.
Now, to your question.
Clustering methods are usually based on Euclidean distances between points. The points that lie close to each other get clustered together; the ones that are far away get assigned to different clusters. As you can see above, all distances between all points stay exactly the same after PCA.
Hence the identical clustering results. Here are both representations clustered with k-means with $k=3$:
As you see, the clustering results are identical.
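This invariance is easy to verify numerically: rotating a point cloud (which is what projecting centered data onto all principal components amounts to) leaves every pairwise Euclidean distance unchanged. A small sketch:

```python
import math

def rotate(points, theta):
    # Rotate 2-D points by angle theta -- an orthogonal map, like the
    # change of basis performed by full-rank PCA on centered data.
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def pairwise_distances(points):
    return [math.dist(p, q) for p in points for q in points]

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
rot = rotate(pts, math.radians(45))  # like the ~45-degree rotation above
```

All pairwise distances agree between `pts` and `rot`, so any distance-based clustering assigns identical labels.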
Can PCA make any difference at all?
Yes. One can use it in two ways:
Standardize all scores to unit variance; or
Use only a subset of principal components, usually the ones that explain the most variance.
Here is how it looks in the same toy example. On the left I am using standardized scores (note how different the clusters become); on the right I am using only PC1.
39,199 | Relationship between Poisson, binomial, negative binomial distributions and normal distribution | The binomial distribution is the distribution of the number of successes in a fixed (i.e. not random) number of independent trials with the same probability of success on each trial. It support is the set $\{0,1,2,\ldots,n\}$, which is finite, where $n$ is the number of trials.
The negative binomial distribution is the distribution of the number of failures before a fixed (i.e. not random) number of successes, again with independent trials and the same probability of success on each trial. Its support is the set $\{0,1,2,3,\ldots\}$, which is infinite.
The Poisson distribution can be loosely characterized as the number of successes in an infinite number of independent trials with an infinitely small probability of success on each trial, in which the expected number of successes is some fixed positive number. It is a limit of the binomial distribution in which the number of trials approaches $\infty$ and the probability of success on each trial approaches $0$ in such a way that the expected number of successes remains constant or at least approaches some positive number.
It is true that for the binomial distribution the mean is larger than the variance, for the negative binomial distribution the mean is smaller than the variance, and for the Poisson distribution they are equal.
But it is not true that for every distribution whose support is some set of cardinal numbers, if the mean equals the variance then it is a Poisson distribution, nor that if the mean is greater than the variance it is a binomial distribution, nor that if the mean is less than the variance it is a negative binomial distribution. For example, the mean of the hypergeometric distribution that arises from sampling without replacement is greater than the variance, as with the binomial distribution, but the distribution is not the same. For the uniform distribution on the set $\{0,1,2,\ldots,n\}$, if $n>4$ then the variance is greater than the mean, as with the negative binomial distribution, but the distribution is not the same. For the uniform distribution on the set $\{0,2\}$, the variance is equal to the mean, as with the Poisson distribution, but the distribution is not the same.
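These counterexamples are easy to check numerically from the probability mass function (a small sketch; the helper name is mine):

```python
def mean_var(pmf):
    # Mean and variance of a discrete distribution given as a
    # {value: probability} mapping.
    m = sum(k * p for k, p in pmf.items())
    v = sum((k - m) ** 2 * p for k, p in pmf.items())
    return m, v

# uniform on {0, 2}: mean equals variance, yet it is not Poisson
m, v = mean_var({0: 0.5, 2: 0.5})  # m == v == 1.0
```

A similar check on the uniform distribution over $\{0,1,\ldots,n\}$ shows the variance exceeding the mean once $n>4$.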
If $X\sim\mathrm{Poisson}(\lambda)$ then
$$
\frac{X-\lambda}{\sqrt\lambda} \overset{\text{D.}} \longrightarrow N(0,1) \text{ as } \lambda\to\infty
$$
because when $\lambda$ is large, the distribution of $X$ is the same as the distribution of the sum of a large number of independent Poisson-distributed random variables, each with mean near $1$. That is because the sum of independent Poisson-distributed random variables is Poisson distributed, so the central limit theorem can be applied.
If $X\sim\mathrm{Binomial}(n,p)$ then
$$
\frac{X-np}{\sqrt{np(1-p)}} \overset{\text{D.}}\longrightarrow N(0,1) \text{ as } n \to \infty
$$
because $X$ has the same distribution as the sum of $n$ independent random variables distributed as $\mathrm{Binomial}(1,p)$, so again the central limit theorem applies.
The negative binomial distribution with parameters $r,p$ is the distribution of the number of failures before the $r$th success, with probability $p$ of success on each trial. If $X$ is so distributed then we have
$$
\frac{X- r(1-p)/p }{\sqrt{r(1-p)}/p} \overset{\text{D.}} \longrightarrow N(0,1) \text{ as } r\to\infty
$$
because $X$ has the same distribution as the sum of $r$ independent random variables distributed as negative binomial with parameters $1,p$, so again the central limit theorem applies.
When approximating any of these kinds of distributions with a normal distribution, note that the event $[X\le n]$ is the same as the event $[X<n+1]$, so use the continuity correction in which you find the probability that $[X\le n+\frac 1 2]$ according to the normal distribution.
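A sketch comparing the exact binomial tail with its continuity-corrected normal approximation (pure Python; the function names are mine):

```python
import math

def binom_cdf(n, p, k):
    # exact P(X <= k) for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_approx_cdf(n, p, k):
    # normal approximation with continuity correction: evaluate at k + 1/2
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    z = (k + 0.5 - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

exact = binom_cdf(100, 0.5, 55)           # both values are ~0.864
approx = normal_approx_cdf(100, 0.5, 55)
```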
The binomial distribution is the distribution of the number of successes in a fixed (i.e. not random) number of independent trials with the same probability of success on each trial. It support is the set $\{0,1,2,\ldots,n\}$, which is finite, where $n$ is the number of trials.
The negative binomial distribution is the distribution of the number of failures before a fixed (i.e. not random) number of successes, again with independent trials and the same probability of success on each trial. Its support is the set $\{0,1,2,3,\ldots\}$, which is infinite.
The Poisson distribution can be loosely characterized as the number of successes in an infinite number of independent trials with an infinitely small probability of success on each trial, in which the expected number of successes is some fixed positive number. It is a limit of the binomial distribution in which the number of trials approaches $\infty$ and the probability of success on each trial approaches $0$ in such a way that the expected number of successes remains constant or at least approaches some positive number.
It is true that for the binomial distribution the mean is larger than the variance, for the negative binomial distribution the mean is smaller than the variance, and for the Poisson distribution they are equal.
But it is not true that for every distribution whose support is some set of cardinal numbers, if the mean equals the variance then it is a Poisson distribution, nor that if the mean is greater than the variance it is a binomial distribution, nor that if the mean is less than the variance it is a negative binomial distribution. For example, the mean of the hypergeometric distribution that arises from sampling without replacement is greater than the variance, as with the binomial distribution, but the distribution is not the same. For the uniform distribution on the set $\{0,1,2,\ldots,n\}$, if $n>4$ then the variance is greater than the mean, as with the negative binomial distribution, but the distribution is not the same. For the uniform distribution on the set $\{0,2\}$, the variance is equal to the mean, as with the Poisson distribution, but the distribution is not the same.
If $X\sim\mathrm{Poisson}(\lambda)$ then
$$
\frac{X-\lambda}{\sqrt\lambda} \overset{\text{D.}} \longrightarrow N(0,1) \text{ as } \lambda\to\infty
$$
because when $\lambda$ is large, the distribution of $X$ is the same as the distribution of the sum of a large number of independent Poisson-distributed random variables, each with expected value near $1$ (e.g. a sum of $\lfloor\lambda\rfloor$ i.i.d. $\mathrm{Poisson}(\lambda/\lfloor\lambda\rfloor)$ variables). That is because the sum of independent Poisson-distributed random variables is Poisson distributed, so the central limit theorem can be applied.
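A small numerical illustration of this limit: for a large $\lambda$ the exact Poisson CDF is already close to the corresponding normal CDF. (The recurrence $p(k)=p(k-1)\,\lambda/k$ avoids overflowing factorials; the $+\tfrac12$ is a continuity correction.)

```python
from math import exp, erf, sqrt

lam = 400.0  # large rate; Poisson(lam) is a sum of many small independent Poissons

def poisson_cdf(x, lam):
    """P(X <= x), summing the pmf via the recurrence p(k) = p(k-1) * lam / k."""
    pmf = total = exp(-lam)
    for k in range(1, x + 1):
        pmf *= lam / k
        total += pmf
    return total

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# compare the exact CDF with the normal approximation at a few points
pairs = []
for t in (-1.0, 0.0, 1.0):
    x = int(lam + t * sqrt(lam))
    pairs.append((poisson_cdf(x, lam), normal_cdf((x + 0.5 - lam) / sqrt(lam))))

for exact, approx in pairs:
    print(f"exact={exact:.4f}  normal={approx:.4f}")
```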
If $X\sim\mathrm{Binomial}(n,p)$ then
$$
\frac{X-np}{\sqrt{np(1-p)}} \overset{\text{D.}}\longrightarrow N(0,1) \text{ as } n \to \infty
$$
because $X$ has the same distribution as the sum of $n$ independent random variables distributed as $\mathrm{Binomial}(1,p)$, so again the central limit theorem applies.
The negative binomial distribution with parameters $r,p$ is the distribution of the number of failures before the $r$th success, with probability $p$ of success on each trial. If $X$ is so distributed then we have
$$
\frac{X- r(1-p)/p}{\sqrt{r(1-p)}/p} \overset{\text{D.}} \longrightarrow N(0,1) \text{ as } r\to\infty
$$
because $X$ has the same distribution as the sum of $r$ independent random variables distributed as negative binomial with parameters $1,p$, so again the central limit theorem applies.
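This sum-of-geometrics representation is easy to check by simulation. Here $p$ is the per-trial success probability, so the number of failures before the $r$th success has mean $r(1-p)/p$ and variance $r(1-p)/p^2$ (the parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
r, p, m = 50, 0.3, 200_000   # arbitrary illustration values

# NumPy's geometric counts trials up to and including the first success,
# so "failures before one success" is that count minus 1
failures = rng.geometric(p, size=(m, r)) - 1
x = failures.sum(axis=1)     # m draws from the negative binomial with parameters r, p

print(x.mean(), r * (1 - p) / p)      # both ≈ 116.7
print(x.var(), r * (1 - p) / p**2)    # both ≈ 388.9
```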
When approximating any of these kinds of distributions with a normal distribution, note that the event $[X\le n]$ is the same as the event $[X<n+1]$, so use the continuity correction, in which you find the probability that $[X\le n+\frac 1 2]$ according to the normal distribution.
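A numerical example of why the correction helps, comparing the exact binomial CDF with the plain and continuity-corrected normal approximations (the parameter values are arbitrary):

```python
from math import comb, erf, sqrt

n, p = 100, 0.3                      # arbitrary illustration values
mu, sigma = n * p, sqrt(n * p * (1 - p))

def binom_cdf(x):
    """Exact P(X <= x) for Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

x = 33
exact = binom_cdf(x)                            # exact P(X <= 33)
plain = normal_cdf((x - mu) / sigma)            # no correction
corrected = normal_cdf((x + 0.5 - mu) / sigma)  # continuity-corrected
print(f"exact={exact:.4f}  plain={plain:.4f}  corrected={corrected:.4f}")
```

The corrected value lands much closer to the exact probability than the uncorrected one.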
Covariance matrix of complex random variables

Here is a geometric interpretation.
First, take two vectors in $\mathbb{R}^2$
$$\vec{\mathbb{z}}=[x,y] \,, \vec{\mathbb{w}}=[u,v]$$
For these vectors, there are two standard types of "products", the dot product
$$\vec{\mathbb{z}}\dot{}\vec{\mathbb{w}}=xu+yv$$
and the cross product*
$$\vec{\mathbb{z}}\times\vec{\mathbb{w}}=xv-yu$$
which can be interpreted as
$$\vec{\mathbb{z}}\dot{}\vec{\mathbb{w}}_\perp$$
where $\vec{\mathbb{w}}_\perp=[v,-u]$ has the same magnitude as $\vec{\mathbb{w}}$ but is orthogonal.
(*Technically this "2D cross product" is defined as $[0,0,\vec{\mathbb{z}}\times\vec{\mathbb{w}}]\equiv[x,y,0]\times[u,v,0]$.)
In terms of geometric intuition, the dot product between two vectors measures how well they align (think correlation), but also their relative magnitudes (think standard deviations), i.e.
$$\vec{\mathbb{z}}\dot{}\vec{\mathbb{w}}=||\vec{\mathbb{z}}||\,||\vec{\mathbb{w}}||\,\cos[\theta]$$
where $\theta$ is the angle between them (compare to $\sigma_{xy}=\sigma_x\sigma_y\rho_{xy}$).
Note that the dot product can also be written as $\mathbb{z}^T\mathbb{w}$, where $\mathbb{z}$ and $\mathbb{w}$ are just $\vec{\mathbb{z}}$ and $\vec{\mathbb{w}}$ written as column vectors.
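A minimal numeric check of these identities, using arbitrary example vectors:

```python
import numpy as np

z = np.array([3.0, 1.0])             # [x, y], arbitrary example values
w = np.array([1.0, 2.0])             # [u, v]

dot = z @ w                          # x*u + y*v
cross = z[0] * w[1] - z[1] * w[0]    # x*v - y*u
w_perp = np.array([w[1], -w[0]])     # [v, -u]: same magnitude, orthogonal to w

print(dot, cross)                    # 5.0 5.0
print(cross == z @ w_perp)           # True: the cross product is a dot with w_perp

# angle between the vectors, and the |z||w|cos(theta) identity
theta = np.arctan2(w[1], w[0]) - np.arctan2(z[1], z[0])
print(np.isclose(dot, np.linalg.norm(z) * np.linalg.norm(w) * np.cos(theta)))  # True
```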
Now let us do the same thing with two scalars in the complex plane ($z,w\in\mathbb{C}$), i.e.
$$z=x+iy \,, w=u+iv$$
What is the equivalent to the "dot product" here? It is actually the same as above, but now using the conjugate transpose, i.e. $z^*\equiv\bar{z}^T$ (also written as $z^\dagger$).
Since the transpose of a scalar is just that same scalar, the complex dot product is then
$$z^{\dagger}w=\bar{z}w=(x-iy)(u+iv)=(xu+yv)+i(xv-yu)$$
We can immediately notice two things. First, the complex dot product is equivalent to
$$z^{\dagger}w=(\vec{\mathbb{z}}\dot{}\vec{\mathbb{w}})+i(\vec{\mathbb{z}}\dot{}\vec{\mathbb{w}}_\perp)$$
i.e. it is a complex number whose real component is the dot product of the corresponding 2-vectors, and whose imaginary component is their cross product. Second, since $\bar{x}=x$ for $x\in\mathbb{R}$, the vector dot product we started with can be written as $\vec{\mathbb{z}}\dot{}\vec{\mathbb{w}}=\mathbb{z}^{\dagger}\mathbb{w}$ (i.e. we were really using the conjugate transpose all along).
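Checking this decomposition with Python's built-in complex numbers (arbitrary example values):

```python
x, y, u, v = 3.0, 1.0, 1.0, 2.0          # arbitrary example values
z, w = complex(x, y), complex(u, v)

zw = z.conjugate() * w                   # the complex "dot product" z-dagger w

dot = x * u + y * v                      # real-vector dot product
cross = x * v - y * u                    # 2D cross product

print(zw)                                # (5+5j)
print(zw.real == dot, zw.imag == cross)  # True True
```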
Now for the covariance matrix.
For simplicity, let us assume that all random variables have zero mean. Then the covariance is defined as
$$\mathrm{Cov}[z,w]\equiv\mathbb{E}[\bar{z}w]$$
so we have
\begin{align}
\mathrm{Cov}[z,w] &= \mathrm{Re}[\sigma_{z,w}]+i\,\mathrm{Im}[\sigma_{z,w}] \\
&= \,\mathbb{E}[\,\vec{\mathbb{z}}\dot{}\vec{\mathbb{w}}\,]+i\,\mathbb{E}[\,\vec{\mathbb{z}}\dot{}\vec{\mathbb{w}}_\perp]
\end{align}
The real (imaginary) component of $\sigma_{z,w}$ is the expected value of the dot (cross) product of the associated vectors $\vec{\mathbb{z}}$ and $\vec{\mathbb{w}}$.
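A quick Monte Carlo check of this decomposition, using an arbitrary correlated construction for the zero-mean components:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 100_000

# correlated zero-mean real components (x, y) and (u, v); the mixing is arbitrary
g = rng.standard_normal((m, 3))
x, y = g[:, 0], 0.5 * g[:, 0] + g[:, 1]
u, v = g[:, 1], g[:, 2] - 0.3 * g[:, 0]

z, w = x + 1j * y, u + 1j * v

cov = np.mean(np.conj(z) * w)   # E[conj(z) w]; the means are zero, so no centering

# real part = E[dot(z, w)], imaginary part = E[cross(z, w)]
print(np.isclose(cov.real, np.mean(x * u + y * v)))   # True
print(np.isclose(cov.imag, np.mean(x * v - y * u)))   # True
```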
This is the main intuition. The rest of this answer is just for completeness.
If our random variable is a column vector $\mathbb{z}=[z_1,\ldots,z_n]\in\mathbb{C}^n$ with covariance matrix $\boldsymbol{\Sigma}\in\mathbb{C}^{n\times n}$, then we have
$$\Sigma_{ij}=\mathrm{Cov}[z_i,z_j]$$
Finally, if we have $m$ samples of the random variable $\mathbb{z}$, arranged as the rows of a data matrix $\boldsymbol{Z}\in\mathbb{C}^{m\times n}$, then the covariance matrix can be approximated by the sample* covariance
$$\boldsymbol{\Sigma}\approx\tfrac{1}{m}\boldsymbol{Z}^{\dagger}\boldsymbol{Z}$$
(*yes, I divided by $m$, so you can call it the "biased" sample covariance if you must.)
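A short check that the $\tfrac{1}{m}\boldsymbol{Z}^{\dagger}\boldsymbol{Z}$ estimate behaves as expected, on random complex data mixed by an arbitrary matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50_000, 3

# complex data matrix with mixed (hence correlated) zero-mean columns
G = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = G @ A

S = (Z.conj().T @ Z) / m    # Sigma approximated by (1/m) Z-dagger Z

print(np.allclose(S, S.conj().T))   # True: covariance matrices are Hermitian
print(np.allclose(S[0, 1], np.mean(np.conj(Z[:, 0]) * Z[:, 1])))  # True: entry = Cov[z_0, z_1]
```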