idx int64 1 56k | question stringlengths 15 155 | answer stringlengths 2 29.2k |
|---|---|---|
27,401 | Compare the statistical significance of the difference between two polynomial regressions in R | #Create some example data
mydata1 <- subset(iris, Species == "setosa", select = c(Sepal.Length, Sepal.Width))
mydata2 <- subset(iris, Species == "virginica", select = c(Sepal.Length, Sepal.Width))
#add a grouping variable
mydata1$g <- "a"
mydata2$g <- "b"
#combine the datasets
mydata <- rbind(mydata1, mydata2)
#model without grouping variable
fit0 <- lm(Sepal.Width ~ poly(Sepal.Length, 2), data = mydata)
#model with grouping variable
fit1 <- lm(Sepal.Width ~ poly(Sepal.Length, 2) * g, data = mydata)
#compare models
anova(fit0, fit1)
#Analysis of Variance Table
#
#Model 1: Sepal.Width ~ poly(Sepal.Length, 2)
#Model 2: Sepal.Width ~ poly(Sepal.Length, 2) * g
# Res.Df RSS Df Sum of Sq F Pr(>F)
#1 97 16.4700
#2 94 7.1143 3 9.3557 41.205 < 2.2e-16 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
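The F statistic in the table can be reproduced by hand from the two residual sums of squares; a quick pure-Python check using the numbers printed above:

```python
# Partial F-test for comparing nested models, from the ANOVA table values.
rss0, df0 = 16.4700, 97  # restricted model: poly(Sepal.Length, 2)
rss1, df1 = 7.1143, 94   # full model: adds the grouping variable

f_stat = ((rss0 - rss1) / (df0 - df1)) / (rss1 / df1)
print(round(f_stat, 3))  # 41.205, matching the F column in the table
```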
As you see, fit1 is significantly better than fit0, i.e. the effect of the grouping variable is significant. Since the grouping variable represents the respective datasets, the polynomial fits to the two datasets can be considered significantly different. |
27,402 | Compare the statistical significance of the difference between two polynomial regressions in R | @Ronald 's answer is the best and it's widely applicable to many similar problems (for example, is there a statistically significant difference between men and women in the relationship between weight and age?). However, I'll add another solution which, while not as quantitative (it doesn't provide a p-value), gives a nice graphical display of the difference.
EDIT: according to this question, it looks like predict.lm, the function used by ggplot2 to compute the confidence intervals, doesn't compute simultaneous confidence bands around the regression curve, but only pointwise confidence bands. The latter are not the right ones to assess whether two fitted linear models are statistically different, or, put another way, whether they could be compatible with the same true model. Thus, they are not the right curves to answer your question. Since apparently there's no R builtin to get simultaneous confidence bands (strange!), I wrote my own function. Here it is:
simultaneous_CBs <- function(linear_model, newdata, level = 0.95){
# Working-Hotelling 1 - alpha confidence bands for the model linear_model
# at points newdata, with alpha = 1 - level
# summary of regression model
lm_summary <- summary(linear_model)
# model degrees of freedom (number of coefficients)
p <- lm_summary$df[1]
# residual degrees of freedom (n - p)
nmp <- lm_summary$df[2]
# F-distribution
Fvalue <- qf(level,p,nmp)
# multiplier
W <- sqrt(p*Fvalue)
# confidence intervals for the mean response at the new points
CI <- predict(linear_model, newdata, se.fit = TRUE, interval = "confidence",
level = level)
# mean value at new points
Y_h <- CI$fit[,1]
# Working-Hotelling 1 - alpha confidence bands
LB <- Y_h - W*CI$se.fit
UB <- Y_h + W*CI$se.fit
sim_CB <- data.frame(LowerBound = LB, Mean = Y_h, UpperBound = UB)
return(sim_CB)
}
library(dplyr)
# sample datasets
setosa <- iris %>% filter(Species == "setosa") %>% select(Sepal.Length, Sepal.Width, Species)
virginica <- iris %>% filter(Species == "virginica") %>% select(Sepal.Length, Sepal.Width, Species)
# compute simultaneous confidence bands
# 1. compute linear models
Model <- as.formula(Sepal.Width ~ poly(Sepal.Length,2))
fit1 <- lm(Model, data = setosa)
fit2 <- lm(Model, data = virginica)
# 2. compute new prediction points
npoints <- 100
newdata1 <- with(setosa, data.frame(Sepal.Length =
seq(min(Sepal.Length), max(Sepal.Length), len = npoints )))
newdata2 <- with(virginica, data.frame(Sepal.Length =
seq(min(Sepal.Length), max(Sepal.Length), len = npoints)))
# 3. simultaneous confidence bands
mylevel = 0.95
cc1 <- simultaneous_CBs(fit1, newdata1, level = mylevel)
cc1 <- cc1 %>% mutate(Species = "setosa", Sepal.Length = newdata1$Sepal.Length)
cc2 <- simultaneous_CBs(fit2, newdata2, level = mylevel)
cc2 <- cc2 %>% mutate(Species = "virginica", Sepal.Length = newdata2$Sepal.Length)
# combine datasets
mydata <- rbind(setosa, virginica)
mycc <- rbind(cc1, cc2)
mycc <- mycc %>% rename(Sepal.Width = Mean)
# plot both simultaneous confidence bands and pointwise confidence
# bands, to show the difference
library(ggplot2)
# prepare a plot using dataframe mydata, mapping sepal Length to x,
# sepal width to y, and grouping the data by species
p <- ggplot(data = mydata, aes(x = Sepal.Length, y = Sepal.Width, color = Species)) +
# add data points
geom_point() +
# add quadratic regression with orthogonal polynomials and 95% pointwise
# confidence intervals
geom_smooth(method ="lm", formula = y ~ poly(x,2)) +
# add 95% simultaneous confidence bands
geom_ribbon(data = mycc, aes(x = Sepal.Length, color = NULL, fill = Species, ymin = LowerBound, ymax = UpperBound),alpha = 0.5)
print(p)
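The heart of simultaneous_CBs is simply widening each pointwise interval by the Working-Hotelling multiplier W = sqrt(p * F). The arithmetic translates directly; a minimal Python sketch (the fitted means, standard errors, and F quantile below are illustrative stand-ins, not values taken from the R fits):

```python
import math

# Illustrative fitted means and standard errors at three prediction points.
y_hat = [3.4, 3.1, 2.9]
se = [0.10, 0.08, 0.12]

p = 3              # number of coefficients (intercept + 2 polynomial terms)
f_quantile = 2.70  # assumed 95% F quantile, i.e. qf(0.95, p, n - p); illustrative
w = math.sqrt(p * f_quantile)  # Working-Hotelling multiplier

# Each simultaneous band is mean +/- W * se, wider than the t-based interval.
lower = [y - w * s for y, s in zip(y_hat, se)]
upper = [y + w * s for y, s in zip(y_hat, se)]
```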
The inner bands are those computed by default by geom_smooth: these are pointwise 95% confidence bands around the regression curves. The outer, semitransparent bands (thanks for the graphics tip, @Roland) are instead the simultaneous 95% confidence bands. As you can see, they're larger than the pointwise bands, as expected. The fact that the simultaneous confidence bands from the two curves don't overlap can be taken as an indication that the difference between the two models is statistically significant.
Of course, for a hypothesis test with a valid p-value, @Roland's approach must be followed, but this graphical approach can be viewed as exploratory data analysis. Also, the plot can give us some additional ideas. It's clear that the models for the two data sets are statistically different. But it also looks like two degree-1 models would fit the data nearly as well as the two quadratic models. We can easily test this hypothesis:
fit_deg1 <- lm(data = mydata, Sepal.Width ~ Species*poly(Sepal.Length,1))
fit_deg2 <- lm(data = mydata, Sepal.Width ~ Species*poly(Sepal.Length,2))
anova(fit_deg1, fit_deg2)
# Analysis of Variance Table
# Model 1: Sepal.Width ~ Species * poly(Sepal.Length, 1)
# Model 2: Sepal.Width ~ Species * poly(Sepal.Length, 2)
# Res.Df RSS Df Sum of Sq F Pr(>F)
# 1 96 7.1895
# 2 94 7.1143 2 0.075221 0.4969 0.61
The difference between the degree 1 model and the degree 2 model is not significant, thus we may as well use two linear regressions, one for each data set. |
27,403 | matched pairs in Python (Propensity score matching) | The easiest way I've found is to use NearestNeighbors from sklearn:
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors
def get_matching_pairs(treated_df, non_treated_df, scaler=True):
    treated_x = treated_df.values
    non_treated_x = non_treated_df.values
    if scaler is True:
        scaler = StandardScaler()
    if scaler:
        # fit the scaler on the treated group, then apply it to both groups
        scaler.fit(treated_x)
        treated_x = scaler.transform(treated_x)
        non_treated_x = scaler.transform(non_treated_x)
    nbrs = NearestNeighbors(n_neighbors=1, algorithm='ball_tree').fit(non_treated_x)
    distances, indices = nbrs.kneighbors(treated_x)
    indices = indices.reshape(indices.shape[0])
    matched = non_treated_df.iloc[indices]
    return matched
Example below:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
treated_df = pd.DataFrame()
np.random.seed(1)
size_1 = 200
size_2 = 1000
treated_df['x'] = np.random.normal(0,1,size=size_1)
treated_df['y'] = np.random.normal(50,20,size=size_1)
treated_df['z'] = np.random.normal(0,100,size=size_1)
non_treated_df = pd.DataFrame()
# two different populations
non_treated_df['x'] = list(np.random.normal(0,3,size=size_2)) + list(np.random.normal(-1,2,size=2*size_2))
non_treated_df['y'] = list(np.random.normal(50,30,size=size_2)) + list(np.random.normal(-100,2,size=2*size_2))
non_treated_df['z'] = list(np.random.normal(0,200,size=size_2)) + list(np.random.normal(13,200,size=2*size_2))
matched_df = get_matching_pairs(treated_df, non_treated_df)
fig, ax = plt.subplots(figsize=(6,6))
plt.scatter(non_treated_df['x'], non_treated_df['y'], alpha=0.3, label='All non-treated')
plt.scatter(treated_df['x'], treated_df['y'], label='Treated')
plt.scatter(matched_df['x'], matched_df['y'], marker='x', label='matched')
plt.legend()
plt.xlim(-1,2) |
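For completeness, the 1-nearest-neighbour step at the core of get_matching_pairs can also be sketched without sklearn; a brute-force pure-Python version for small data (squared Euclidean distance, no standardisation):

```python
def match_indices(treated, pool):
    """For each treated row, return the index of its nearest pool row."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(pool)), key=lambda j: sq_dist(t, pool[j]))
            for t in treated]

treated = [(0.0, 0.0), (5.0, 5.0)]
pool = [(4.9, 5.2), (0.1, -0.2), (10.0, 10.0)]
print(match_indices(treated, pool))  # [1, 0]
```

Unlike the ball-tree version this is O(n*m), but it makes the matching rule explicit.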
27,404 | matched pairs in Python (Propensity score matching) | In answer to your question, there are libraries and small recipes that deal with propensity score matching. Such is the case for:
Implements propensity-score matching and eventually will implement balance diagnostics
CausalInference
This last resource (a library) also has an article written to explain what the library actually does. You can check it here. The main features are:
Assessment of overlap in covariate distributions
Estimation of propensity score
Improvement of covariate balance through trimming
Subclassification on propensity score
Estimation of treatment effects via matching, blocking, weighting, and least squares |
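Of the features listed, the matching step itself is simple to sketch. Below is a hypothetical greedy one-to-one matcher on already-estimated propensity scores, with a caliper; the function name and caliper value are illustrative, not part of either library above:

```python
def caliper_match(treated_ps, control_ps, caliper=0.05):
    """Pair each treated unit with the closest unused control whose
    propensity score differs by at most `caliper` (greedy, one-to-one)."""
    used, pairs = set(), {}
    for i, pt in enumerate(treated_ps):
        best, best_d = None, caliper
        for j, pc in enumerate(control_ps):
            d = abs(pt - pc)
            if j not in used and d <= best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs[i] = best
    return pairs

print(caliper_match([0.30, 0.70], [0.72, 0.31, 0.10]))  # {0: 1, 1: 0}
```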
27,405 | What is this diagram called | This plot is called a "Tree plot" in Tableau. You can see here how to create one. You can find a trial version of Tableau here.
Hope this helps! |
27,406 | What is this diagram called | If the difference between tree plot and mosaic plot coincides with the distinction between hierarchically-arranged categories and how single categories are broken down, then OP's image would appear to be a tree plot.
At first blush, I believed that the plot was a mosaic plot, which is one way to present stratified categories. A tutorial on the construction of mosaic plots in R can be found here.
I'll research this issue further when I have a moment. |
27,407 | What is this diagram called | Treemapping is an information visualization method displaying hierarchical data by using nested rectangles.
It has origins in mosaic plots (left) or Marimekko charts (right), adding nesting or embedding to the standard mosaic structure. The one you displayed is well-balanced, without elongated, skinny rectangles, that degrade the appearance of some treemaps generated by the "slice-and-dice" tiling algorithm.
So it belongs to the subspecies of "squarified treemaps", which can be decorated with colors or shading in cushion treemaps. A description is given in M. Bruls et al. (2000), Squarified Treemaps, Proceedings of the Joint EUROGRAPHICS and IEEE TCVG Symposium on Visualization, Amsterdam, The Netherlands, May 29–30, 2000. Apparently, the idea of squarifying, i.e. constraining rectangles to a low aspect ratio, was already present in M. Zizi and M. Beaudouin-Lafon (1994), Accessing Hyperdocuments Through Interactive Dynamic Maps, Proceedings of the 1994 ACM European Conference on Hypermedia Technology. |
27,408 | What is this diagram called | It looks like a variation of a mosaic plot.
(source: statmethods.net) |
27,409 | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)? | SEM is an umbrella term. CFA is the measurement part of SEM, which shows relationships between latent variables and their indicators. The other part is the structural component, or the path model, which shows how the variables of interest (often latent variables) are related.
You can run CFA alone, path analysis alone, or a full SEM. Path analysis is SEM without latent variables. |
27,410 | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)? | I agree with @Hotaka's answer and would like to add to it.
CFA (confirmatory factor analysis) actually tests a measurement model. This means that you have some data collected using a questionnaire. The questions of the questionnaire are called items or indicator variables. Using EFA (or a similar process), you derive the constructs for groups of these items.
CFA is used to confirm and trim these constructs and items (measurement model).
SEM is used to find if relationships exist between these items and constructs (structural model).
Collectively they are known as CFA-SEM, where SEM is an umbrella term and CFA is a subset. But we use the term SEM specifically for the hypothesis-testing part (testing relationships among indicators and constructs). |
CFA (Confirmatory factor analysis) actually tests a measurement model. This means that you have some data collected using a questionnaire. Th | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)?
I agree with @Hotaka's answer and would like to add to it.
CFA (Confirmatory factor analysis) actually tests a measurement model. This means that you have some data collected using a questionnaire. The questions of the questionnaire are called items or indicator variables. Using EFA (or similar process) you come to derive the constructs for the groups of these items.
CFA is used to confirm and trim these constructs and items (measurement model).
SEM is used to find if relationships exist between these items and constructs (structural model).
Collectively they are known as CFA-SEM, where SEM is an umbrella term, and CFA is a subset. But we use the term SEM specifically for hypothesis testing part (testing relationships among indicators and constructs). | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)?
I agree with @Hotaka's answer and would like to add to it.
CFA (Confirmatory factor analysis) actually tests a measurement model. This means that you have some data collected using a questionnaire. Th |
27,411 | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)? | I've taken this from Wikipedia:
"CFA is distinguished from structural equation modeling by the fact that in CFA, there are no directed arrows between latent factors. In other words, while in CFA factors are not presumed to directly cause one another, SEM often does specify particular factors and variables to be causal in nature. In the context of SEM, the CFA is often called 'the measurement model', while the relations between the latent variables (with directed arrows) are called 'the structural model'." | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)? | I've taken this from Wikipedia:
"CFA is distinguished from structural equation modeling by the fact that in CFA, there are no directed arrows between latent factors. In other words, while in CFA facto | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)?
I've taken this from Wikipedia:
"CFA is distinguished from structural equation modeling by the fact that in CFA, there are no directed arrows between latent factors. In other words, while in CFA factors are not presumed to directly cause one another, SEM often does specify particular factors and variables to be causal in nature. In the context of SEM, the CFA is often called 'the measurement model', while the relations between the latent variables (with directed arrows) are called 'the structural model'." | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)?
I've taken this from Wikipedia:
"CFA is distinguished from structural equation modeling by the fact that in CFA, there are no directed arrows between latent factors. In other words, while in CFA facto |
27,412 | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)? | Bollen and Pearl (2013) in "Handbook of Causal Analysis for Social Research" treat factor analysis models (like CFA) as part of SEM.
Excerpt:
In the path diagram, the ovals or circles represent the latent variables. As stated above, these are variables that are part of our theory, but not in our data set. As in the previous path diagrams, the observed variables are in boxes, single-headed arrows stand for direct causal effects, and two-headed arrows (often curved) signify sources of associations between the connected variables, though the reasons for their associations are not specified in the model. It could be that they have direct causal influence on each other, that some third set of variables not part of the model influence both, or there could be some other unspecified mechanism (preferential selection) leading them to be associated. The model only says that they are associated and not why. [...]
To my mind, CFA is a SEM model where you don't take any position on why or how the latent variables are correlated. | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)? | Bollen and Pearl (2013) in "Handbook of Causal Analysis for Social Research" treat factor analysis models (like CFA) as part of SEM.
Excerpt:
In the path diagram, the ovals or circles represent the l | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)?
Bollen and Pearl (2013) in "Handbook of Causal Analysis for Social Research" treat factor analysis models (like CFA) as part of SEM.
Excerpt:
In the path diagram, the ovals or circles represent the latent variables. As stated above, these are variables that are part of our theory, but not in our data set. As in the previous path diagrams, the observed variables are in boxes, single-headed arrows stand for direct causal effects, and two-headed arrows (often curved) signify sources of associations between the connected variables, though the reasons for their associations are not specified in the model. It could be that they have direct causal influence on each other, that some third set of variables not part of the model influence both, or there could be some other unspecified mechanism (preferential selection) leading them to be associated. The model only says that they are associated and not why. [...]
To my mind, CFA is a SEM model where you don't take any position on why or how the latent variables are correlated. | Is structural equation modeling (SEM) just another name of confirmatory factor analysis (CFA)?
Bollen and Pearl (2013) in "Handbook of Causal Analysis for Social Research" treat factor analysis models (like CFA) as part of SEM.
Excerpt:
In the path diagram, the ovals or circles represent the l |
27,413 | How to determine which variable goes on the X & Y axes in a scatterplot? | If you have a variable you see as "explanatory" and the other one as the thing being explained, then one (very common) convention is to put the explanatory variable on the x-axis and the thing being explained by it on the y-axis.
So, for example, you may be viewing the relationship between literacy and mortality as potentially causal (and thus, clearly explanatory) in that greater literacy might lead to lower mortality.
In that case it would be common to put mortality on the y-axis and literacy on the x-axis.
But it's also possible to conceive of them the other way around (high infant mortality might well affect literacy rates), or with neither being explanatory of the other.
In some cases, if one variable is 'fixed' and the other is 'random', the more common convention is that the random one tends to go on the y-axis of the plot.
In some areas the conventions may tend to be flipped around; this is simply the most widespread. | How to determine which variable goes on the X & Y axes in a scatterplot? | If you have a variable you see as "explanatory" and the other one as the thing being explained, then one (very common) convention is to put the explanatory variable on the x-axis and the thing being e | How to determine which variable goes on the X & Y axes in a scatterplot?
If you have a variable you see as "explanatory" and the other one as the thing being explained, then one (very common) convention is to put the explanatory variable on the x-axis and the thing being explained by it on the y-axis.
So, for example, you may be viewing the relationship between literacy and mortality as potentially causal (and thus, clearly explanatory) in that greater literacy might lead to lower mortality.
In that case it would be common to put mortality on the y-axis and literacy on the x-axis.
But it's also possible to conceive of them the other way around (high infant mortality might well affect literacy rates), or with neither being explanatory of the other.
In some cases, if one variable is 'fixed' and the other is 'random', the more common convention is that the random one tends to go on the y-axis of the plot.
In some areas the conventions may tend to be flipped around; this is simply the most widespread. | How to determine which variable goes on the X & Y axes in a scatterplot?
If you have a variable you see as "explanatory" and the other one as the thing being explained, then one (very common) convention is to put the explanatory variable on the x-axis and the thing being e |
27,414 | How to determine which variable goes on the X & Y axes in a scatterplot? | Any x-y scatter plot is relevant only to the end user (pretty much what whuber said). In general, the x-axis is the variable (cause) and the y-axis is the response (effect). In your case, I would suggest that literacy is a variable that affects baby mortality, so I would put literacy on the X and mortality on the Y. | How to determine which variable goes on the X & Y axes in a scatterplot? | Any x-y scatter plot is relevant only to the end user (pretty much what whuber said). In general, the x-axis is the variable (cause) and the y-axis is the response (effect). In your case, I would su | How to determine which variable goes on the X & Y axes in a scatterplot?
Any x-y scatter plot is relevant only to the end user (pretty much what whuber said). In general, the x-axis is the variable (cause) and the y-axis is the response (effect). In your case, I would suggest that literacy is a variable that affects baby mortality, so I would put literacy on the X and mortality on the Y. | How to determine which variable goes on the X & Y axes in a scatterplot?
Any x-y scatter plot is relevant only to the end user (pretty much what whuber said). In general, the x-axis is the variable (cause) and the y-axis is the response (effect). In your case, I would su |
27,415 | How to determine which variable goes on the X & Y axes in a scatterplot? | Independent variable goes on the x-axis (the thing you are changing)
Dependent variable goes on the y-axis (the thing you are measuring) | How to determine which variable goes on the X & Y axes in a scatterplot? | Independent variable goes on the x-axis (the thing you are changing)
Dependent variable goes on the y-axis (the thing you are measuring) | How to determine which variable goes on the X & Y axes in a scatterplot?
Independent variable goes on the x-axis (the thing you are changing)
Dependent variable goes on the y-axis (the thing you are measuring) | How to determine which variable goes on the X & Y axes in a scatterplot?
Independent variable goes on the x-axis (the thing you are changing)
Dependent variable goes on the y-axis (the thing you are measuring) |
27,416 | Generating random numbers from a t-distribution | I have an answer to the practical part of your question, though not quite the theoretical one.
There is a function called TINV that directly does this. Except that it only returns positive random t variates. You can get around that limitation with the following formula:
=TINV(RAND(),6)*(RANDBETWEEN(0,1)*2-1)
...you can replace 6 with whatever value you want for the DF, and the rand() can be replaced with any number between 0 and 1. The rest of it simply guarantees equal probability of negative and positive values. | Generating random numbers from a t-distribution | I have an answer to the practical part of your question, though not quite the theoretical one.
There is a function called TINV that directly does this. Except that it only returns positive random t v | Generating random numbers from a t-distribution
I have an answer to the practical part of your question, though not quite the theoretical one.
There is a function called TINV that directly does this. Except that it only returns positive random t variates. You can get around that limitation with the following formula:
=TINV(RAND(),6)*(RANDBETWEEN(0,1)*2-1)
...you can replace 6 with whatever value you want for the DF, and the rand() can be replaced with any number between 0 and 1. The rest of it simply guarantees equal probability of negative and positive values. | Generating random numbers from a t-distribution
I have an answer to the practical part of your question, though not quite the theoretical one.
There is a function called TINV that directly does this. Except that it only returns positive random t v
27,417 | Generating random numbers from a t-distribution | Given a generator of i.i.d. standard gaussian random variates, you can generate $t_k$ distributed random variates (with any positive integer degree of freedom $k$) by using the relation:
$$Y=\frac{X_{k+1}}{\sqrt{k^{-1}\sum_{i=1}^k X_i^2}}$$
where $Y\sim t_k$ and $X_i\sim\text{i.i.d. }\mathcal{N}(0,1),i=1,\ldots,k+1.$ | Generating random numbers from a t-distribution | Given a generator of i.i.d. standard gaussian random variates, you can generate $t_k$ distributed random variates (with any positive integer degree of freedom $k$) by using the relation:
$$Y=\frac{X_{ | Generating random numbers from a t-distribution
Given a generator of i.i.d. standard gaussian random variates, you can generate $t_k$ distributed random variates (with any positive integer degree of freedom $k$) by using the relation:
$$Y=\frac{X_{k+1}}{\sqrt{k^{-1}\sum_{i=1}^k X_i^2}}$$
where $Y\sim t_k$ and $X_i\sim\text{i.i.d. }\mathcal{N}(0,1),i=1,\ldots,k+1.$ | Generating random numbers from a t-distribution
Given a generator of i.i.d. standard gaussian random variates, you can generate $t_k$ distributed random variates (with any positive integer degree of freedom $k$) by using the relation:
$$Y=\frac{X_{ |
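Given only a standard-normal generator, the relation above translates directly into code. A sketch in Python (illustrative only; the answers in this thread use R and Excel), checking the simulated moments against the theoretical mean 0 and variance $k/(k-2)$:

```python
import random

def rt(k, rng):
    """One t-variate with k degrees of freedom, built from k+1 standard normals."""
    z = rng.gauss(0.0, 1.0)                                  # the numerator X_{k+1}
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(k))   # sum of k squared normals
    return z / (chi2 / k) ** 0.5

rng = random.Random(0)
draws = [rt(10, rng) for _ in range(100_000)]

# Sanity check: for k > 2 the t_k distribution has mean 0 and variance k/(k-2).
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))   # ≈ 0 and ≈ 10/8 = 1.25
```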
27,418 | Generating random numbers from a t-distribution | A fast way of generating a t variate, faster than the gaussian-only approach for all but the smallest degrees of freedom, is to use the fact that a t distribution is a mixture of Normals, with the mixture distribution being an inverted gamma distribution on the variance. Here's an example in R, where we generate 1,000,000 t(10) variates this way and compare to the theoretical distribution using the Kolmogorov-Smirnov test ("proof" by large experiment!):
> df <- 10
> s2 <- 1/rgamma(1000000, df/2, df/2)
> tv <- rnorm(1000000,0,sqrt(s2))
>
> ks.test(tv, pt, df=df)
One-sample Kolmogorov-Smirnov test
data: tv
D = 6e-04, p-value = 0.8826
alternative hypothesis: two-sided
This approach also works for non-integer degrees of freedom, which the gaussian-only approach does not. | Generating random numbers from a t-distribution | A fast way of generating a t variate, faster than the gaussian-only approach for all but the smallest degrees of freedom, is to use the fact that a t distribution is a mixture of Normals, with the mix | Generating random numbers from a t-distribution
A fast way of generating a t variate, faster than the gaussian-only approach for all but the smallest degrees of freedom, is to use the fact that a t distribution is a mixture of Normals, with the mixture distribution being an inverted gamma distribution on the variance. Here's an example in R, where we generate 1,000,000 t(10) variates this way and compare to the theoretical distribution using the Kolmogorov-Smirnov test ("proof" by large experiment!):
> df <- 10
> s2 <- 1/rgamma(1000000, df/2, df/2)
> tv <- rnorm(1000000,0,sqrt(s2))
>
> ks.test(tv, pt, df=df)
One-sample Kolmogorov-Smirnov test
data: tv
D = 6e-04, p-value = 0.8826
alternative hypothesis: two-sided
This approach also works for non-integer degrees of freedom, which the gaussian-only approach does not. | Generating random numbers from a t-distribution
A fast way of generating a t variate, faster than the gaussian-only approach for all but the smallest degrees of freedom, is to use the fact that a t distribution is a mixture of Normals, with the mix |
27,419 | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression? | Logistic regression will, up to numerical imprecision, give exactly the same fits as the tabulated percentages. Therefore, if your independent variables are factor objects factor1, etc., and the dependent results (0 and 1) are x, then you can obtain the effects with an expression like
aggregate(x, list(factor1, <etc>), FUN=mean)
Compare this to
glm(x ~ factor1 * <etc>, family=binomial(link="logit"))
As an example, let's generate some random data:
set.seed(17)
n <- 1000
x <- sample(c(0,1), n, replace=TRUE)
factor1 <- as.factor(floor(2*runif(n)))
factor2 <- as.factor(floor(3*runif(n)))
factor3 <- as.factor(floor(4*runif(n)))
The summary is obtained with
aggregate.results <- aggregate(x, list(factor1, factor2, factor3), FUN=mean)
aggregate.results
Its output includes
Group.1 Group.2 Group.3 x
1 0 0 0 0.5128205
2 1 0 0 0.4210526
3 0 1 0 0.5454545
4 1 1 0 0.6071429
5 0 2 0 0.4736842
6 1 2 0 0.5000000
...
24 1 2 3 0.5227273
For future reference, the estimate for factors at levels (1,2,0) in row 6 of the output is 0.5.
The logistic regression gives up its coefficients this way:
model <- glm(x ~ factor1 * factor2 * factor3, family=binomial(link="logit"))
b <- model$coefficients
To use them, we need the logistic function:
logistic <- function(x) 1 / (1 + exp(-x))
To obtain, e.g., the estimate for factors at levels (1,2,0), compute
logistic (b["(Intercept)"] + b["factor11"] + b["factor22"] + b["factor11:factor22"])
(Notice how all interactions must be included in the model and all associated coefficients have to be applied to obtain a correct estimate.)
The output is
(Intercept)
0.5
agreeing with the results of aggregate. (The "(Intercept)" heading in the output is a vestige of the input and effectively meaningless for this calculation.)
The same information in yet another form appears in the output of table. E.g., the (lengthy) output of
table(x, factor1, factor2, factor3)
includes this panel:
, , factor2 = 2, factor3 = 0
factor1
x 0 1
0 20 21
1 18 21
The column for factor1 = 1 corresponds to the three factors at levels (1,2,0) and shows that $21/(21+21) = 0.5$ of the values of x equal $1$, agreeing with what we read out of aggregate and glm.
Finally, a combination of factors yielding the highest proportion in the dataset is conveniently obtained from the output of aggregate:
> aggregate.results[which.max(aggregate.results$x),]
Group.1 Group.2 Group.3 x
4 1 1 0 0.6071429 | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba | Logistic regression will, up to numerical imprecision, give exactly the same fits as the tabulated percentages. Therefore, if your independent variables are factor objects factor1, etc., and the depe | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression?
Logistic regression will, up to numerical imprecision, give exactly the same fits as the tabulated percentages. Therefore, if your independent variables are factor objects factor1, etc., and the dependent results (0 and 1) are x, then you can obtain the effects with an expression like
aggregate(x, list(factor1, <etc>), FUN=mean)
Compare this to
glm(x ~ factor1 * <etc>, family=binomial(link="logit"))
As an example, let's generate some random data:
set.seed(17)
n <- 1000
x <- sample(c(0,1), n, replace=TRUE)
factor1 <- as.factor(floor(2*runif(n)))
factor2 <- as.factor(floor(3*runif(n)))
factor3 <- as.factor(floor(4*runif(n)))
The summary is obtained with
aggregate.results <- aggregate(x, list(factor1, factor2, factor3), FUN=mean)
aggregate.results
Its output includes
Group.1 Group.2 Group.3 x
1 0 0 0 0.5128205
2 1 0 0 0.4210526
3 0 1 0 0.5454545
4 1 1 0 0.6071429
5 0 2 0 0.4736842
6 1 2 0 0.5000000
...
24 1 2 3 0.5227273
For future reference, the estimate for factors at levels (1,2,0) in row 6 of the output is 0.5.
The logistic regression gives up its coefficients this way:
model <- glm(x ~ factor1 * factor2 * factor3, family=binomial(link="logit"))
b <- model$coefficients
To use them, we need the logistic function:
logistic <- function(x) 1 / (1 + exp(-x))
To obtain, e.g., the estimate for factors at levels (1,2,0), compute
logistic (b["(Intercept)"] + b["factor11"] + b["factor22"] + b["factor11:factor22"])
(Notice how all interactions must be included in the model and all associated coefficients have to be applied to obtain a correct estimate.)
The output is
(Intercept)
0.5
agreeing with the results of aggregate. (The "(Intercept)" heading in the output is a vestige of the input and effectively meaningless for this calculation.)
The same information in yet another form appears in the output of table. E.g., the (lengthy) output of
table(x, factor1, factor2, factor3)
includes this panel:
, , factor2 = 2, factor3 = 0
factor1
x 0 1
0 20 21
1 18 21
The column for factor1 = 1 corresponds to the three factors at levels (1,2,0) and shows that $21/(21+21) = 0.5$ of the values of x equal $1$, agreeing with what we read out of aggregate and glm.
Finally, a combination of factors yielding the highest proportion in the dataset is conveniently obtained from the output of aggregate:
> aggregate.results[which.max(aggregate.results$x),]
Group.1 Group.2 Group.3 x
4 1 1 0 0.6071429 | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba
Logistic regression will, up to numerical imprecision, give exactly the same fits as the tabulated percentages. Therefore, if your independent variables are factor objects factor1, etc., and the depe |
27,420 | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression? | For a quick glance at the proportion of binary responses within each category and/or conditional on multiple categories, graphical plots can be of service. In particular, to simultaneously visualize proportions conditioned on many categorical independent variables I would suggest Mosaic Plots.
Below is an example taken from a blog post, Understanding area based plots: Mosaic plots from the Statistical graphics and more blog. This example visualizes the proportion of survivors on the Titanic in blue, conditional on the class of the passenger. One can simultaneously assess the proportion of survivors, while still appreciating the total number of passengers within each of the subgroups (useful information for sure, especially when certain sub-groups are sparse in number and we would expect more random variation).
(source: theusrus.de)
One can then make subsequent mosaic plots conditional on multiple categorical independent variables. The next example from the same blog post demonstrates, in a quick visual summary, that all child passengers in the first and second classes survived, while children in the third class did not fare nearly as well. It also clearly shows that adult females had a much higher survival rate than males within each class, although the proportion of female survivors diminished appreciably from the first to the second to the third class (and was then relatively high again for the crew, though note that few female crew members exist, given how narrow the bar is).
(source: theusrus.de)
It is amazing how much information is displayed: these are proportions in four dimensions (Class, Adult/Child, Sex and Proportion of Survivors)!
I agree that if you are interested in prediction, or in causal explanation more generally, you will want to turn to more formal modelling. Graphical plots can be very quick visual clues, though, as to the nature of the data, and can provide other insights often missed when simply estimating regression models (especially when considering interactions between the different categorical variables).
For a quick glance at the proportion of binary responses within each category and/or conditional on multiple categories, graphical plots can be of service. In particular, to simultaneously visualize proportions conditioned on many categorical independent variables I would suggest Mosaic Plots.
Below is an example taken from a blog post, Understanding area based plots: Mosaic plots from the Statistical graphics and more blog. This example visualizes the proportion of survivors on the Titanic in blue, conditional on the class of the passenger. One can simultaneously assess the proportion of survivors, while still appreciating the total number of passengers within each of the subgroups (useful information for sure, especially when certain sub-groups are sparse in number and we would expect more random variation).
(source: theusrus.de)
One can then make subsequent mosaic plots conditional on multiple categorical independent variables. The next example from the same blog post demonstrates, in a quick visual summary, that all child passengers in the first and second classes survived, while children in the third class did not fare nearly as well. It also clearly shows that adult females had a much higher survival rate than males within each class, although the proportion of female survivors diminished appreciably from the first to the second to the third class (and was then relatively high again for the crew, though note that few female crew members exist, given how narrow the bar is).
(source: theusrus.de)
It is amazing how much information is displayed: these are proportions in four dimensions (Class, Adult/Child, Sex and Proportion of Survivors)!
I agree that if you are interested in prediction, or in causal explanation more generally, you will want to turn to more formal modelling. Graphical plots can be very quick visual clues, though, as to the nature of the data, and can provide other insights often missed when simply estimating regression models (especially when considering interactions between the different categorical variables).
For a quick glance at the proportion of binary responses within each category and/or conditional on multiple categories, graphical plots can be of service. In particular, to simultaneously visualize p |
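The quantities a mosaic plot encodes are just conditional proportions plus subgroup sizes, which can be tabulated directly. A hypothetical sketch in Python (the passenger records below are invented toy data, not the actual Titanic counts):

```python
from collections import defaultdict

# Invented toy records: (class, sex, survived) -- for illustration only.
passengers = [
    ("1st", "female", 1), ("1st", "female", 1), ("1st", "male", 0),
    ("3rd", "female", 1), ("3rd", "female", 0), ("3rd", "male", 0),
    ("3rd", "male", 0), ("3rd", "male", 1),
]

counts = defaultdict(lambda: [0, 0])          # (class, sex) -> [survived, total]
for pclass, sex, survived in passengers:
    counts[(pclass, sex)][0] += survived
    counts[(pclass, sex)][1] += 1

# The survival proportion within each subgroup is what the plot shades;
# the subgroup total is what sets the width of each tile.
for key, (s, n) in sorted(counts.items()):
    print(key, f"{s}/{n} = {s / n:.2f}")
```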
27,421 | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression? | Depending on your needs, you might find that recursive partioning provides an easy to interpret method for predicting an outcome variable. For an R introduction to these methods, see Quick-R's Tree-based model page. I generally favour ctree() implementation in R's `party package as one does not have to worry about pruning, and it produces pretty graphics by default.
This would fall into the category of feature selection algorithms suggested in a previous answer, and generally gives as good if not better predictions as logistic regression. | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba | Depending on your needs, you might find that recursive partioning provides an easy to interpret method for predicting an outcome variable. For an R introduction to these methods, see Quick-R's Tree-ba | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression?
Depending on your needs, you might find that recursive partitioning provides an easy-to-interpret method for predicting an outcome variable. For an R introduction to these methods, see Quick-R's Tree-based model page. I generally favour the ctree() implementation in R's party package, as one does not have to worry about pruning, and it produces pretty graphics by default.
This would fall into the category of feature selection algorithms suggested in a previous answer, and generally gives as good if not better predictions as logistic regression. | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba
Depending on your needs, you might find that recursive partitioning provides an easy-to-interpret method for predicting an outcome variable. For an R introduction to these methods, see Quick-R's Tree-ba
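The most basic step of recursive partitioning can be shown without any library: scan candidate thresholds on a feature and keep the split that minimizes weighted Gini impurity. A toy sketch in Python (ctree() in R's party package does far more, choosing splits via formal significance tests, which this sketch omits):

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(x, y):
    """Return (threshold, impurity) of the best split 'x <= t' for binary y."""
    best = (None, float("inf"))
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue  # a split must leave observations on both sides
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best

# A perfectly separable toy example: y flips once x exceeds 3.
x = [1, 2, 3, 4, 5, 6]
y = [0, 0, 0, 1, 1, 1]
print(best_split(x, y))   # → (3, 0.0): splitting at x <= 3 leaves zero impurity
```

A full tree applies `best_split` recursively to each side until a stopping rule fires.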
27,422 | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression? | Given your five categorical predictors with let's say 20 outcomes each, then the solution with a different prediction for each configuration of predictors needs $20^5$ parameters. Each of those parameters needs many training examples in order to be learned well. Do you have at least ten million training examples spread over all configurations? If so, go ahead and do it that way.
If you have less data, you want to learn fewer parameters. You can reduce the number of parameters by assuming, for example, that configurations of individual predictors have consistent effects on the response variable.
If you believe that your predictors are independent of each other, then logistic regression is the unique algorithm that does the right thing. (Even if they're not independent, it can still do fairly well.)
In summary, logistic regression makes an assumption about independent influence of predictors, which reduces the number of model parameters, and yields a model that's easy to learn. | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba | Given your five categorical predictors with let's say 20 outcomes each, then the solution with a different prediction for each configuration of predictors needs $20^5$ parameters. Each of those param | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression?
Given your five categorical predictors with, say, 20 levels each, the solution with a different prediction for each configuration of predictors needs $20^5$ parameters. Each of those parameters needs many training examples in order to be learned well. Do you have at least ten million training examples spread over all configurations? If so, go ahead and do it that way.
If you have less data, you want to learn fewer parameters. You can reduce the number of parameters by assuming, for example, that configurations of individual predictors have consistent effects on the response variable.
If you believe that your predictors are independent of each other, then logistic regression is the unique algorithm that does the right thing. (Even if they're not independent, it can still do fairly well.)
In summary, logistic regression makes an assumption about independent influence of predictors, which reduces the number of model parameters, and yields a model that's easy to learn. | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba
Given your five categorical predictors with let's say 20 outcomes each, then the solution with a different prediction for each configuration of predictors needs $20^5$ parameters. Each of those param |
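The counting argument is easy to make concrete. A small sketch (in Python) contrasting the saturated table of configurations with a main-effects-only logistic regression under dummy coding:

```python
n_factors, n_levels = 5, 20

# Saturated model: one free probability per configuration of the five predictors.
saturated = n_levels ** n_factors

# Main-effects logistic regression: an intercept plus (levels - 1)
# dummy-coefficient parameters for each factor.
main_effects = 1 + n_factors * (n_levels - 1)

print(saturated, main_effects)   # → 3200000 96
```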
27,423 | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression? | You should look at feature selection algorithms. One that is suitable for your case (binary classification, categorical variables) is the "minimum Redundancy Maximum Relevance" (mRMR) method. You can quickly try it online at http://penglab.janelia.org/proj/mRMR/ | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba | You should look at feature selection algorithms. One that is suitable for your case (binary classification, categorical variables) is the "minimum Redundancy Maximum Relevance" (mRMR) method. You can | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression?
You should look at feature selection algorithms. One that is suitable for your case (binary classification, categorical variables) is the "minimum Redundancy Maximum Relevance" (mRMR) method. You can quickly try it online at http://penglab.janelia.org/proj/mRMR/ | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba
You should look at feature selection algorithms. One that is suitable for your case (binary classification, categorical variables) is the "minimum Redundancy Maximum Relevance" (mRMR) method. You can |
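mRMR scores each candidate feature by relevance (mutual information with the target) minus redundancy (average mutual information with features already selected). A hypothetical sketch of the relevance half for discrete variables in Python:

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Mutual information (in nats) between two paired discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(
        (c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

y     = [0, 0, 1, 1, 0, 0, 1, 1]
copy  = [0, 0, 1, 1, 0, 0, 1, 1]   # perfectly informative feature
noise = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of y

print(mutual_information(copy, y))    # → ln(2) ≈ 0.693
print(mutual_information(noise, y))   # → 0.0
```

Greedy mRMR selection would then repeatedly pick the feature maximizing relevance minus its mean MI with the already-selected set.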
27,424 | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression? | I work in the field of credit scoring, where what here is being presented as a strange case is the norm.
We use logistic regression, and convert both categorical and continuous variables into weights of evidence (WOEs), which are then used as the predictors in the regression. A lot of time is spent grouping the categorical variables and discretising (binning/classing) the continuous variables.
The weight of evidence is a simple calculation. It is the log of the odds for the class, less the log of odds for the population:
WOE = ln(Good(Class)/Bad(Class)) - ln(Good(ALL)/Bad(ALL))
This is the standard transformation methodology for almost all credit scoring models built using logistic regression. You can use the same numbers in a piecewise approach.
The beauty of it is that you will always know whether the coefficients being assigned to each WOE make sense. Negative coefficients are contrary to the patterns within the data, and usually result from multicollinearity; and coefficients over 1.0 indicate overcompensation. Most coefficients will come out somewhere between zero and one. | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba | I work in the field of credit scoring, where what here is being presented as a strange case is the norm.
We use logistic regression, and convert both categorical and continuous variables into weights | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate probabilities or logistic regression?
I work in the field of credit scoring, where what here is being presented as a strange case is the norm.
We use logistic regression, and convert both categorical and continuous variables into weights of evidence (WOEs), which are then used as the predictors in the regression. A lot of time is spent grouping the categorical variables and discretising (binning/classing) the continuous variables.
The weight of evidence is a simple calculation. It is the log of the odds for the class, less the log of odds for the population:
WOE = ln(Good(Class)/Bad(Class)) - ln(Good(ALL)/Bad(ALL))
This is the standard transformation methodology for almost all credit scoring models built using logistic regression. You can use the same numbers in a piecewise approach.
The beauty of it is that you will always know whether the coefficients being assigned to each WOE make sense. Negative coefficients are contrary to the patterns within the data, and usually result from multicollinearity; and coefficients over 1.0 indicate overcompensation. Most coefficients will come out somewhere between zero and one. | How to assess predictive power of set of categorical predictors of a binary outcome? Calculate proba
I work in the field of credit scoring, where what here is being presented as a strange case is the norm.
We use logistic regression, and convert both categorical and continuous variables into weights |
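The WOE formula above is trivial to compute once the good/bad counts per class are in hand. A sketch in Python (the counts here are invented for illustration):

```python
from math import log

def weight_of_evidence(good_class, bad_class, good_all, bad_all):
    """WOE = ln(odds of good within the class) - ln(odds of good overall)."""
    return log(good_class / bad_class) - log(good_all / bad_all)

# Hypothetical bin: 200 goods / 50 bads, against a book of 5000 goods / 2500 bads.
woe = weight_of_evidence(200, 50, 5000, 2500)
print(round(woe, 4))   # ln(4) - ln(2) = ln(2) → 0.6931
```

A positive WOE marks a class with better-than-average odds; a negative one, worse.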
27,425 | What algorithm could be used to predict consumables usage given data from past purchases? | The question concerns the rate of consumption versus time. This calls for regression of the rate against time (not regression of total purchases against time). Extrapolation is accomplished by constructing prediction limits for future purchases.
Several models are possible. Given the move to a paperless office (which has been ongoing for about 25 years :-), we might adopt an exponential (decrease) model. The result is portrayed by the following scatterplot of the consumption, on which are drawn the exponential curve (fitted via ordinary least squares to the logarithms of the consumption) and its 95% prediction limits. Extrapolated values would be expected to lie near the line and between the prediction limits with 95% confidence.
The vertical axis shows pages per day on a linear scale. The dark blue solid line is the fit: it is truly exponential but comes remarkably close to being linear. The effect of the exponential fit appears in the prediction bands, which on this linear scale are asymmetrically placed around the fit; on a log scale, they would be symmetric.
A more precise model would account for the fact that consumption information is more uncertain over shorter periods of time (or when total purchases are smaller), which could be fitted using weighted least squares. Given the variability in these data and the rough equality of the size of all purchases, it's not worth the extra effort.
This approach accommodates intermediate inventory data, which can be used to interpolate consumption rates at intermediate times. In such a case, because the intermediate amounts of consumption could vary considerably, the weighted least squares approach would be advisable.
What weights to use? We might consider the paper consumption, which necessarily accrues in integral amounts of paper, as a count which varies independently from day to day. Over short periods, the variance of the count would therefore be proportional to the length of the period. The variance of the count per day would then be inversely proportional to the length of the period. Consequently the weights should be directly proportional to the periods elapsed between inventories. Thus, for example, the consumption of 1000 sheets between 2007-05-10 and 2007-11-11 (about 180 days) would have almost five times the weight of the 1000 sheet consumption between 2007-11-11 and 2007-12-18, a period of only 37 days.
The same weighting can be accommodated in the prediction intervals. This would result in relatively wide intervals for predictions of consumption during one day compared to prediction for consumption over, say, three months.
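A minimal sketch of the unweighted version of this recipe (the inventory log below is hypothetical): fit a straight line to the log consumption rate by ordinary least squares, then back-transform 95% prediction limits.

```python
import numpy as np

# Hypothetical inventory log: day number of each inventory and pages
# consumed since the previous one.
t = np.array([30.0, 120.0, 300.0, 480.0, 660.0, 900.0])
pages = np.array([500.0, 1000.0, 1500.0, 1000.0, 1000.0, 500.0])
dt = np.diff(np.concatenate([[0.0], t]))   # days in each interval
rate = pages / dt                          # pages per day
mid = t - dt / 2.0                         # interval midpoints

# Exponential model: OLS line through log(rate) versus time.
X = np.column_stack([np.ones_like(mid), mid])
y = np.log(rate)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
n, p = X.shape
s2 = resid @ resid / (n - p)               # residual variance

# 95% prediction limits for the rate on a future day, back-transformed.
x0 = np.array([1.0, 1000.0])
se = np.sqrt(s2 * (1.0 + x0 @ np.linalg.solve(X.T @ X, x0)))
tcrit = 2.776                              # t quantile 0.975 at n - p = 4 df
fit = x0 @ beta
lower, upper = np.exp(fit - tcrit * se), np.exp(fit + tcrit * se)
```

The weighted variant discussed above would pass weights proportional to `dt` into the least-squares step.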
Please note that these suggestions focus on simple models and simple predictions, appropriate for the intended application and the obvious large variability in the data. If the projections involved, say, defense spending for a large country, we would want to accommodate many more explanatory variables, account for temporal correlation, and provide much more detailed information in the model.
27,426 | What algorithm could be used to predict consumables usage given data from past purchases? | This is definitely a machine learning problem (I updated tags in your post). Most probably, this is linear regression. In short, linear regression tries to recover the relationship between one dependent variable and one or more independent variables. The dependent variable here is consumables usage. For independent variables I suggest time intervals between purchases. You can also add more independent variables, for example, the number of people who used consumables at each moment, or anything else that can affect the amount of purchases. You can find a nice description of linear regression together with an implementation in Python here.
In theory, it is also possible that not only the time intervals between purchases, but also the moments in time themselves influence the amounts. For example, for some reason in January people may want more paper than, say, in April. In this case you cannot use the month number directly as an independent variable, due to the nature of linear regression (the month number is just a label, but would be treated as a quantity). There are two ways to overcome this.
First, you can add 12 additional variables, one for each month, and set each variable to 1 if it represents the month of purchase and to 0 if it doesn't. Then use the same linear regression.
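That indicator ("one-hot") encoding can be sketched as follows (the months below are hypothetical):

```python
import numpy as np

# Month of each purchase, encoded as 12 indicator columns so a linear
# model can learn a separate additive effect per month.
months = np.array([1, 4, 4, 11, 12, 1])
one_hot = np.zeros((months.size, 12))
one_hot[np.arange(months.size), months - 1] = 1.0
```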
Second, you can use a more sophisticated algorithm, such as M5', which is a mix of linear regression and decision trees (you can find a detailed description of this algorithm in Data Mining: Practical Machine Learning Tools and Techniques).
27,427 | What algorithm could be used to predict consumables usage given data from past purchases? | "it's not 'sampled' at regular intervals, so I think it doesn't qualify as a Time Series data."
Here is an idea about how to forecast the purchases: consider the data as an intermittent demand series. That is, you do have a time series sampled at regular intervals, but the positive values are obviously irregularly spaced. Rob Hyndman has a nice paper on using Croston's method for forecasting intermittent demand series. While I also program a lot in Python, you'll save a lot of exploration time by using Croston's method, as well as other time series forecasting methods, readily available in Rob's excellent R package forecast.
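A rough sketch of Croston's idea (my own simplification, not the forecast package's implementation; the demand series and smoothing constant are made up): smooth the nonzero demand sizes and the intervals between them separately, and forecast their ratio.

```python
def croston(demand, alpha=0.1):
    """Forecast mean demand per period for an intermittent series."""
    z = p = None   # smoothed demand size / smoothed inter-demand interval
    q = 1          # periods since the last nonzero demand
    for d in demand:
        if d > 0:
            if z is None:
                z, p = float(d), float(q)   # initialise at first demand
            else:
                z += alpha * (d - z)
                p += alpha * (q - p)
            q = 1
        else:
            q += 1
    return None if z is None else z / p

# e.g. sheets of paper bought each month, zeros between purchases
forecast = croston([0, 0, 500, 0, 0, 0, 1000, 0, 500, 0, 0, 0])
```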
27,428 | What algorithm could be used to predict consumables usage given data from past purchases? | I'm pretty sure you are trying to do some regression analysis to fit a line to your data points. There are plenty of tools out there to help you - MS Excel being the most accessible. If you want to roll your own solution, best to brush up on your statistics (here and here, perhaps). Once you fit a line to your data, you can extrapolate into the future.
EDIT: Here is a screenshot of the excel example I mentioned in the comments below. The bolded dates are random dates in the future that I typed in myself. The bold values in column B are extrapolated values calculated by Excel's flavor of exponential regression.
EDIT2: OK, so to answer the question of, "What techniques can I use?"
exponential regression (mentioned above)
Holt's Method
Winters' method
ARIMA
Please see this page for a little intro on each: http://www.decisioncraft.com/dmdirect/forecastingtechnique.htm
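As a quick illustration, Holt's method from the list above can be sketched in a few lines (the data and smoothing constants are hypothetical):

```python
def holt(series, alpha=0.3, beta=0.1, horizon=1):
    """Double exponential smoothing with a linear trend."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# e.g. monthly consumable usage, extrapolated two periods ahead
forecast = holt([10, 12, 13, 15, 16, 18], horizon=2)
```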
27,429 | What algorithm could be used to predict consumables usage given data from past purchases? | Started as a comment, grew too long...
"it's not 'sampled' at regular intervals, so I think it doesn't qualify as a Time Series data"
This is an erroneous conclusion - it's certainly time series. A time series may be irregularly sampled; it just tends to require approaches different from the usual ones when it is.
This problem appears to be related to stochastic problems like dam levels (water is generally used at a fairly stable rate over time, sometimes increasing or decreasing more or less quickly, while at other times it's fairly stable), while dam levels tend to only increase rapidly (essentially in jumps), as rainfall occurs. The paper usage and replenishment patterns may be somewhat similar (though the amount ordered may tend to be much more stable and in much rounder numbers than rainfall amounts, and to occur whenever the level gets low).
It's also related to insurance company capital (but kind of reversed) - aside from initial capital, money from premiums (net of operating costs) and investments comes in fairly steadily (sometimes more or less), while insurance policy payments tend to be made in relatively large amounts.
Both of those things have been modelled, and may provide a little insight for this problem.
27,430 | What algorithm could be used to predict consumables usage given data from past purchases? | you should have a look at WEKA. It is a tool and a Java API with a suite of machine learning
algorithms.
In particular you should look for classification algorithms.
Good luck
27,431 | What algorithm could be used to predict consumables usage given data from past purchases? | I would use linear least squares to fit a model to the cumulative consumption (i.e. running total of pages by date). An initial assumption would be to use a first degree polynomial. However the residuals indicate that the first degree is underfitting the data in the example, so the next logical step would be to increase it to a second degree (i.e. quadratic) fit. This removes the curvature in the residuals, and the slightly negative coefficient for the squared term means that the consumption rate is decreasing over time, which seems intuitive given that most people probably tend to use less paper over time. For this data I don't think you need to go beyond a second degree fit, as you may start overfitting and the resulting extrapolation may not make sense.
You can see the fits (including extrapolation) and the residuals in the plots below.
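A minimal sketch of that fit (the purchase history below is hypothetical):

```python
import numpy as np

# Hypothetical purchase history: day of purchase and pages bought.
days = np.array([0.0, 30.0, 120.0, 300.0, 480.0, 660.0, 900.0])
pages = np.array([500.0, 500.0, 1000.0, 1500.0, 1000.0, 1000.0, 500.0])
cumulative = np.cumsum(pages)

# Second-degree least-squares fit to the running total.
coeffs = np.polyfit(days, cumulative, deg=2)

# Extrapolate the running total 90 days past the last purchase.
projected = np.polyval(coeffs, days[-1] + 90.0)
```

A negative leading coefficient here corresponds to the decreasing consumption rate discussed above.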
If you can, it might be good to perform bootstrapping to get a better estimate of the prediction errors.
27,432 | What algorithm could be used to predict consumables usage given data from past purchases? | I think you can get your data using operations research.
Why don't you try to find some equations that take as variables the amount of paper used per time period, the users of the paper, etc.?
27,433 | Acceptance rates for Metropolis-Hastings with uniform candidate distribution | I believe that Weak convergence and optimal scaling of random walk Metropolis algorithms by Roberts, Gelman and Gilks is the source for the 0.234 optimal acceptance rate.
What the paper shows is that, under certain assumptions, you can scale the random walk Metropolis-Hastings algorithm as the dimension of the space goes to infinity to get a limiting diffusion for each coordinate. In the limit, the diffusion can be seen as "most efficient" if the acceptance rate takes the value 0.234. Intuitively, it is a tradeoff between making too many small accepted steps and making too many large proposals that get rejected.
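A toy illustration of that tradeoff (my own sketch, not the paper's setting: a one-dimensional standard normal target with a symmetric uniform proposal) shows how the proposal scale drives the empirical acceptance rate:

```python
import math
import random

def rw_metropolis_rate(n_steps, scale, seed=0):
    """Empirical acceptance rate of random-walk Metropolis on N(0, 1)."""
    rng = random.Random(seed)
    x, accepted = 0.0, 0
    for _ in range(n_steps):
        prop = x + rng.uniform(-scale, scale)    # symmetric proposal
        log_alpha = 0.5 * (x * x - prop * prop)  # log target ratio
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x, accepted = prop, accepted + 1
    return accepted / n_steps

small = rw_metropolis_rate(20000, scale=0.1)   # tiny steps: nearly all accepted
large = rw_metropolis_rate(20000, scale=25.0)  # huge steps: mostly rejected
```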
The Metropolis-Hastings algorithm is not really an optimization algorithm, in contrast to simulated annealing. It is an algorithm that is supposed to simulate from the target distribution, hence the acceptance probability should not be driven towards 0.
27,434 | Acceptance rates for Metropolis-Hastings with uniform candidate distribution | Just to add to the answer by @NRH. The general idea follows the Goldilocks principle:
If the jumps are "too large", then the chain sticks;
If the jumps are "too small", then the chain explores the parameter space very slowly;
We want the jumps to be just right.
Of course the question is, what do we mean by "just right". Essentially, for a particular case they minimise the expected square jump distance. This is equivalent to minimising the lag-1 autocorrelations. Recently, Sherlock and Roberts showed that the magic 0.234 holds for other target distributions:
C. Sherlock, G. Roberts (2009); Optimal scaling of the random walk Metropolis on elliptically symmetric unimodal targets; Bernoulli 15(3)
27,435 | Acceptance rates for Metropolis-Hastings with uniform candidate distribution | I am adding this as an answer because I don't have enough reputation for commenting under the question. I think you are confusing the acceptance rate and the acceptance ratio.
The acceptance ratio is used to decide whether to accept or reject a candidate. The ratio which you are calling the acceptance rate is actually called the acceptance ratio, and it is different from the acceptance rate.
The acceptance rate is the rate of accepting candidates. It is the ratio of the number of unique values in the MCMC chain to the total number of values in the MCMC chain.
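For instance, with a stored chain the rate can be estimated by counting the transitions where the value actually changed (which matches the unique-value count as long as accepted values never repeat); the chain below is a made-up toy example:

```python
# Repeated consecutive values correspond to rejected proposals.
chain = [0.0, 0.0, 0.4, 0.4, 0.4, 0.9, 1.3, 1.3]
moves = sum(1 for a, b in zip(chain, chain[1:]) if a != b)
acceptance_rate = moves / (len(chain) - 1)
```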
Your question about the optimal acceptance rate being 20% is actually about the real acceptance rate, not the acceptance ratio. The answer is given in the other answers. I just wanted to point out the confusion you are having.
27,436 | How can I determine weibull parameters from data? | Maximum Likelihood Estimation of Weibull parameters may be a good idea in your case. A form of the Weibull distribution looks like this:
$$(\gamma / \theta) (x)^{\gamma-1}\exp(-x^{\gamma}/\theta)$$
Where $\theta, \gamma > 0$ are parameters. Given observations $X_1, \ldots, X_n$, the log-likelihood function is
$$L(\theta, \gamma)=\displaystyle \sum_{i=1}^{n}\log f(X_i| \theta, \gamma)$$
One "programming based" solution would be to optimize this function using constrained optimization. Solving for the optimum:
$$\frac {\partial \log L} {\partial \gamma} = \frac{n}{\gamma} + \sum_1^n \log x_i - \frac{1}{\theta}\sum_1^nx_i^{\gamma}\log x_i = 0 $$
$$\frac {\partial \log L} {\partial \theta} = -\frac{n}{\theta} + \frac{1}{\theta^2}\sum_1^nx_i^{\gamma}=0$$
On eliminating $\theta$ we get:
$$\Bigg[ \frac {\sum_1^n x_i^{\gamma} \log x_i}{\sum_1^n x_i^{\gamma}} - \frac {1}{\gamma}\Bigg]=\frac{1}{n}\sum_1^n \log x_i$$
Now this can be solved for the ML estimate $\hat \gamma$. This can be accomplished with standard iterative procedures for solving such equations -- Newton-Raphson or other numerical root-finding methods.
Now $\theta$ can be found in terms of $\hat \gamma$ as:
$$\hat \theta = \frac {\sum_1^n x_i^{\hat \gamma}}{n}$$
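One way to carry this out numerically (a sketch using bisection instead of Newton-Raphson; the simulated sample assumes $\theta = 1$, $\gamma = 1.5$ in the parameterisation above):

```python
import math
import random

def weibull_mle(xs, lo=0.05, hi=20.0, iters=100):
    """MLE for the density (gamma/theta) x^(gamma-1) exp(-x^gamma / theta)."""
    mean_log = sum(math.log(x) for x in xs) / len(xs)

    def g(gamma):
        # LHS minus RHS of the eliminated-theta equation above.
        s = sum(x ** gamma for x in xs)
        sl = sum((x ** gamma) * math.log(x) for x in xs)
        return sl / s - 1.0 / gamma - mean_log

    for _ in range(iters):          # bisection on g(gamma) = 0
        midpoint = 0.5 * (lo + hi)
        if g(lo) * g(midpoint) <= 0.0:
            hi = midpoint
        else:
            lo = midpoint
    gamma_hat = 0.5 * (lo + hi)
    theta_hat = sum(x ** gamma_hat for x in xs) / len(xs)
    return gamma_hat, theta_hat

random.seed(1)
# In this parameterisation X**gamma ~ Exponential(mean theta), so with
# theta = 1 we can simulate draws as (-ln U) ** (1/gamma).
sample = [(-math.log(random.random())) ** (1.0 / 1.5) for _ in range(2000)]
gamma_hat, theta_hat = weibull_mle(sample)
```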
27,437 | How can I determine weibull parameters from data? | Use fitdistrplus:
Need help identifying a distribution by its histogram
Here's an example of how the Weibull Distribution is fit:
library(fitdistrplus)
#Generate fake data
shape <- 1.9
x <- rweibull(n=1000, shape=shape, scale=1)
#Fit x data with fitdist
fit.w <- fitdist(x, "weibull")
summary(fit.w)
plot(fit.w)
Fitting of the distribution ' weibull ' by maximum likelihood
Parameters :
estimate Std. Error
shape 1.8720133 0.04596699
scale 0.9976703 0.01776794
Loglikelihood: -636.1181 AIC: 1276.236 BIC: 1286.052
Correlation matrix:
shape scale
shape 1.0000000 0.3166085
scale 0.3166085 1.0000000
27,438 | Estimation of exponential model | There are several issues here.
(1) The model needs to be explicitly probabilistic. In almost all cases there will be no set of parameters for which the lhs matches the rhs for all your data: there will be residuals. You need to make assumptions about those residuals. Do you expect them to be zero on the average? To be symmetrically distributed? To be approximately normally distributed?
Here are two models that agree with the one specified but allow drastically different residual behavior (and therefore will typically result in different parameter estimates). You can vary these models by varying assumptions about the joint distribution of the $\epsilon_{i}$:
$$\text{A:}\ y_{i} =\beta_{0} \exp{\left(\beta_{1}x_{1i}+\ldots+\beta_{k}x_{ki} + \epsilon_{i}\right)}$$
$$\text{B:}\ y_{i} =\beta_{0} \exp{\left(\beta_{1}x_{1i}+\ldots+\beta_{k}x_{ki}\right)} + \epsilon_{i}.$$
(Note that these are models for the data $y_i$; there usually is no such thing as an estimated data value $\hat{y_i}$.)
(2) The need to handle zero values for the y's implies the stated model (A) is both wrong and inadequate, because it cannot produce a zero value no matter what the random error equals. The second model above (B) allows for zero (or even negative) values of y's. However, one should not choose a model solely on such a basis. To reiterate #1: it is important to model the errors reasonably well.
(3) Linearization changes the model. Typically, it results in models like (A) but not like (B). It is used by people who have analyzed their data enough to know this change will not appreciably affect the parameter estimates and by people who are ignorant of what is happening. (It is hard, many times, to tell the difference.)
(4) A common way to handle the possibility of a zero value is to propose that $y$ (or some re-expression thereof, such as the square root) has a strictly positive chance of equaling zero. Mathematically, we are mixing a point mass (a "delta function") in with some other distribution. These models look like this:
$$\eqalign{
f(y_i) &\sim F(\mathbf{\theta}); \cr
\theta_j &= \beta_{j0} + \beta_{j1} x_{1i} + \cdots + \beta_{jk} x_{ki}
}$$
where $\Pr_{F_\theta}[f(Y) = 0] = \theta_{j+1} \gt 0$ is one of the parameters implicit in the vector $\mathbf{\theta}$, $F$ is some family of distributions parameterized by $\theta_1, \ldots, \theta_j$, and $f$ is the reexpression of the $y$'s (the "link" function of a generalized linear model: see onestop's reply). (Of course, then, $\Pr_{F_\theta}[f(Y) \le t]$ = $(1 - \theta_{j+1})F_\theta(t)$ when $t \ne 0$.) Examples are the zero-inflated Poisson and Negative Binomial models.
(5) The issues of constructing a model and fitting it are related but different. As a simple example, even an ordinary regression model $Y = \beta_0 + \beta_1 X + \epsilon$ can be fit in many ways by means of least squares (which gives the same parameter estimates as Maximum Likelihood and almost the same standard errors), iteratively reweighted least squares, various other forms of "robust least squares," etc. The choice of fitting is often based on convenience, expedience (e.g., availability of software), familiarity, habit, or convention, but at least some thought should be given to what is appropriate for the assumed distribution of the error terms $\epsilon_i$, to what the loss function for the problem might reasonably be, and to the possibility of exploiting additional information (such as a prior distribution for the parameters).
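As a hypothetical illustration of points (1)-(3), here is a small simulation in which data generated under model B are fit both directly by nonlinear least squares and by the linearized regression (which implicitly assumes model A); the two give different parameter estimates:

```r
set.seed(2)
x <- runif(200)
y <- 2 * exp(1.5 * x) + rnorm(200, sd = 0.4)  # model B: additive errors

# Direct fit of model B by nonlinear least squares
fitB <- nls(y ~ b0 * exp(b1 * x), start = list(b0 = 1, b1 = 1))

# Linearized fit: log(y) ~ x implicitly assumes model A (multiplicative errors)
fitA <- lm(log(y) ~ x)

coef(fitB)                                # close to the true (2, 1.5)
c(exp(coef(fitA)[[1]]), coef(fitA)[[2]])  # generally different estimates
```

Whether the linearization matters in practice depends on how large the errors are relative to the mean.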
27,439 | Estimation of exponential model | This is a generalized linear model (GLM) with a log link function.
Any probability distribution on $[0,\infty)$ with non-zero density at zero will handle $y_i=0$ in some observations; the most common would be the Poisson distribution, resulting in Poisson regression, a.k.a. log-linear modelling. Another choice would be a negative binomial distribution.
If you don't have count data, or if $y_i$ takes non-integer values, you can still use the framework of generalized linear models without fully specifying a distribution for $\operatorname{P}(y_i|\bf{x})$ but instead only specifying the relationship between its mean and variance using quasi-likelihood.
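For example (a hypothetical simulation with count data, where zeros occur naturally):

```r
set.seed(3)
n <- 300
x <- runif(n)
y <- rpois(n, lambda = exp(0.5 + 1.2 * x))  # counts; includes zeros

# Poisson regression: GLM with log link
fit_pois <- glm(y ~ x, family = poisson(link = "log"))

# Quasi-likelihood version: only the mean-variance relationship is specified
fit_quasi <- glm(y ~ x, family = quasipoisson(link = "log"))

coef(fit_pois)  # recovers roughly (0.5, 1.2)
```

The quasipoisson fit gives the same coefficient estimates but adjusts the standard errors for over- or under-dispersion.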
27,440 | Estimation of exponential model | You can always use non-linear least squares. Then your model will be:
$$y_i=\beta_0\exp(\beta_1x_{1i}+...+\beta_kx_{ki})+\varepsilon_i$$
The zeroes in $y_i$ then will be treated as deviations from the non-linear trend.
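A hypothetical sketch with nls (the censoring at zero here is only to manufacture some exact zeros in the response; they are kept in the fit as ordinary deviations):

```r
set.seed(4)
n <- 150
x1 <- runif(n); x2 <- runif(n)
mu <- 1.5 * exp(0.8 * x1 - 1.1 * x2)
y  <- pmax(0, mu + rnorm(n, sd = 0.4))  # exact zeros can occur; kept as-is

fit <- nls(y ~ b0 * exp(b1 * x1 + b2 * x2),
           start = list(b0 = 1, b1 = 0, b2 = 0))
coef(fit)
```

The zeros do not break the fit, but as noted in the other answers they may still signal a misspecified error model.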
27,441 | Resources for learning about spurious time series regression | These concepts have been created to deal with regressions (for instance correlation) between non stationary series.
Clive Granger is the key author you should read.
Cointegration has been introduced in 2 steps:
1/ Granger, C., and P. Newbold (1974): "Spurious Regression in Econometrics,"
In this article, the authors point out that regression among non stationary variables should be conducted as regressions among changes (or log changes) of the variables. Otherwise you might find high correlation without any real significance. (= spurious regression)
2/ Engle, Robert F., Granger, Clive W. J. (1987) "Co-integration and error correction: Representation, estimation and testing", Econometrica, 55(2), 251-276.
In this article (for which Granger has been rewarded by the Nobel jury in 2003), the authors go further, and introduce cointegration as a way to study the error correction model that can exist between two non stationary variables.
Basically the 1974 advice to regress the change in the time series may lead to misspecified regression models. You can indeed have variables whose changes are uncorrelated, but which are connected through an "error correction model".
Hence, you can have correlation without cointegration, and cointegration without correlation. The two are complementary.
If there was only one paper to read, I suggest you start with this one, which is a very good and nice introduction:
(Murray 1994) "A Drunk and Her Dog: An Illustration of Cointegration and Error Correction"
Clive Granger is the key author you should read.
Cointegration has been introduced i | Resources for learning about spurious time series regression
These concepts have been created to deal with regressions (for instance correlation) between non stationary series.
Clive Granger is the key author you should read.
Cointegration has been introduced in 2 steps:
1/ Granger, C., and P. Newbold (1974): "Spurious Regression in Econometrics,"
In this article, the authors point out that regression among non stationary variables should be conducted as regressions among changes (or log changes) of the variables. Otherwise you might find high correlation without any real significance. (= spurious regression)
2/ Engle, Robert F., Granger, Clive W. J. (1987) "Co-integration and error correction: Representation, estimation and testing", Econometrica, 55(2), 251-276.
In this article (for which Granger has been rewarded by the Nobel jury in 2003), the authors go further, and introduce cointegration as a way to study the error correction model that can exist between two non stationary variables.
Basically the 1974 advice to regress the change in the time series may lead to unspecified regression models. You can indeed have variables whose changes are uncorrelated, but which are connected through an "error correction model".
Hence, you can have correlation without cointegration, and cointegration without correlation. The two are complementary.
If there was only one paper to read, I suggest you start with this one, which is a very good and nice introduction:
(Murray 1993) Drunk and her dog | Resources for learning about spurious time series regression
These concepts have been created to deal with regressions (for instance correlation) between non stationary series.
Clive Granger is the key author you should read.
Cointegration has been introduced i |
27,442 | Resources for learning about spurious time series regression | Let's start with the spurious regression. Take or imagine two series which are both driven by a dominant time trend: for example US population and US consumption of whatever (it doesn't matter what item you think about, be it soda or licorice or gas). Both series will be growing because of the common time trend. Now regress aggregate consumption on aggregate population size and presto, you have a great fit. (We could simulate that quickly in R too.)
But it means nothing. There is no relationship (as we as the modelers know) -- yet the linear model sees a fit (in the minimizing sum of squares sense) as both series happen to both be uptrending without a causal link. We fell victim to a spurious regression.
What could or should be modeled is change in one series on change in the other, or maybe per capita consumption, or ... All those changes make the variables stationary which helps to alleviate the issue.
Now, from 30,000 feet, unit roots and cointegration help you with formal inference in these cases by providing a rigorous statistical underpinning
(Econometrica publications and a Nobel don't come easily) where none was available.
As for question in good resources: it's tricky. I have read dozens of time series books, and most excel at the math and leave the intuition behind. There is nothing like Kennedy's Econometrics text for time series. Maybe Walter Enders text comes closest. I will try to think of some more and update here.
Other than books, software for actually doing this is important and R has what you need. The price is right too.
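That quick simulation in R could look like this (the series names are made up):

```r
set.seed(5)
n <- 200
time_idx <- 1:n
population  <- 100 + 0.5 * time_idx + rnorm(n, sd = 2)  # trending, but independent...
consumption <- 50  + 1.2 * time_idx + rnorm(n, sd = 5)  # ...of this trending series

fit_levels <- lm(consumption ~ population)
summary(fit_levels)$r.squared   # near 1: a spurious "great fit"

# Regressing changes on changes removes the common trend
fit_diffs <- lm(diff(consumption) ~ diff(population))
summary(fit_diffs)$r.squared    # near 0: no real relationship
```

The levels regression looks excellent only because both series ride the same time trend; in differences the apparent relationship vanishes.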
27,443 | Resources for learning about spurious time series regression | A series is said to have a unit root if it's non-stationary. When you have, say, two non-stationary processes integrated to order 1 (I(1) series) and you can find a linear combination of those processes which is I(0), then your series are cointegrated. This means that they evolve in a somewhat similar way.
This channel has some nice insights about time series, cointegration and so https://www.youtube.com/watch?v=vvTKjm94Ars
As for books, I quite like "Econometric Theory and Methods" by Davidson & MacKinnon.
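A toy base-R illustration of that linear combination (informal; a real analysis would apply a cointegration test such as Engle-Granger, e.g. via the urca package):

```r
set.seed(6)
n <- 500
w <- cumsum(rnorm(n))   # shared stochastic trend (I(1))
x <- w + rnorm(n)       # each series is I(1) ...
y <- 2 * w + rnorm(n)   # ... because it inherits the trend from w

z <- y - 2 * x          # the cointegrating combination is I(0)
c(sd_y = sd(y), sd_z = sd(z))   # z's spread is far smaller than y's
```

Both x and y wander without bound, yet y - 2x stays in a narrow stationary band: that is cointegration.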
27,444 | Statistical learning when observations are not iid | There is nothing in the theory of statistical learning or machine learning that requires samples to be i.i.d.
When samples are i.i.d, you can write the joint probability of the samples given some model as a product, namely $P(\{x\}) = \Pi_{i} P_i(x_i)$ which makes the log-likelihood a sum of the individual log-likelihoods. This simplifies the calculation, but is by no means a requirement.
In your case, you can for example model the distribution of a pair $x_i,y_i$ with some bi-variate distribution, say $z_i=(x_i,y_i)^T$ , $z_i \sim \mathcal{N}(\mu,\Sigma)$ , and then estimate the parameter $\Sigma$ from the likelihood $P(\{z\}) = \Pi_{i} P(z_i | \mu, \Sigma)$.
It is true that many out-of-the-box algorithm implementations implicitly assume independence between samples, so you are correct in identifying that you will have a problem applying them to your data as-is. You will either have to modify the algorithm or find ones that are better suited for your case.
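For the bivariate-normal case above, the ML estimates of $\mu$ and $\Sigma$ are just the sample mean vector and the $1/n$ covariance matrix; a hypothetical sketch with simulated pairs:

```r
set.seed(7)
n <- 1000
x <- rnorm(n)
y <- 0.8 * x + rnorm(n, sd = 0.6)  # x_i and y_i are dependent within a pair
z <- cbind(x, y)                   # pairs z_i are independent across i

mu_hat    <- colMeans(z)                          # ML estimate of mu
Sigma_hat <- crossprod(sweep(z, 2, mu_hat)) / n   # ML estimate of Sigma (divisor n)
Sigma_hat
```

Here the true covariance matrix has unit variances and covariance 0.8, and the estimate recovers the within-pair dependence without any i.i.d. assumption on the pair components.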
27,445 | Statistical learning when observations are not iid | Markov processes are not only very general ways to analyze longitudinal data with statistical models, they also lend themselves to machine learning. They work because by modeling transition probabilities conditional on previous states the records are conditionally independent and may be treated as coming from different independent subjects. One can use discrete or continuous time processes, discrete being simpler. The main work comes from post-estimation processing to convert transition probabilities into unconditional (on previous state) state occupancy probabilities AKA current status probabilities. See this and other documents in this.
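A base-R sketch of the discrete-time case (simulate a two-state chain, then recover the transition matrix from transition counts; conditioning each record on the previous state is what makes the records conditionally independent):

```r
set.seed(8)
P <- matrix(c(0.9, 0.1,
              0.3, 0.7), nrow = 2, byrow = TRUE)  # true transition matrix

# Simulate one long chain
s <- integer(5000); s[1] <- 1
for (i in 2:5000) s[i] <- sample(1:2, 1, prob = P[s[i - 1], ])

# MLE of the transition probabilities: row-normalized transition counts
counts <- table(head(s, -1), tail(s, -1))
P_hat  <- counts / rowSums(counts)
P_hat
```

Post-estimation, the rows of powers of P_hat give the state occupancy probabilities after a given number of steps.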
27,446 | Statistical learning when observations are not iid | There are a few good answers here already but I thought it worth noting that the answer to this question can change drastically depending on how the iid assumption is violated. For example, if a univariate dataset is not iid, but is stationary, then many very simple estimation procedures, such as the sample mean, still converge to the appropriate limit.
However, if the iid assumption is violated because the data is non-stationary, then life is much more difficult. Note that the very common Machine Learning tradition of splitting the dataset into a training, test, and sometimes validation set is invalid in the presence of non-stationarity. If this is the difficulty you face then your best bet is usually to try and find a transformation of the data that is close to stationary (or ergodic) and work with that instead.
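As a small illustration of such a transformation (differencing a random walk):

```r
set.seed(9)
rw <- cumsum(rnorm(2000))  # non-stationary random walk
r  <- diff(rw)             # first differences are stationary (white noise here)

# The sample mean of the stationary series converges to its true mean of 0;
# no comparable statement holds for the level of the walk itself
mean(r)
```

Log-differencing (diff(log(x))) is the analogous transform for series that grow multiplicatively.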
27,447 | URL Feature representations | For short-length text analysis with smaller datasets, I've found pretrained word embeddings useful. For example, taking the /path/to/the/myfile part of @Tim's answer, you can tokenize to [path, to, the, myfile] (in this specific case, probably dropping the common to, the, maybe trying to split long strings like myfile), and get their respective embeddings. From there, it seems common to just average the embeddings over all words in a document; depending on your specific usecase, some other aggregation may be worth exploring. For example, if you only need a distance between URLs, you could use the word-mover distance.
Common domains can probably also be found in a word embedding, but uncommon ones probably won't appear. Request parameters and anchors may also be usable, depending on how human-readable they are. The other components of Tim's answer can be used directly as categorical features (or numerical, in the case of domain length).
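A toy base-R sketch of the tokenize-then-average step (the two-dimensional "embeddings" here are made-up stand-ins for real pretrained vectors):

```r
url_path <- "/path/to/the/myfile"
# Split on "/", drop the empty leading token and common stopwords
tokens <- setdiff(strsplit(url_path, "/")[[1]], c("", "to", "the"))
tokens  # "path" "myfile"

# Hypothetical tiny embedding table standing in for pretrained vectors
emb <- rbind(path = c(0.1, 0.3), myfile = c(0.5, -0.2))
doc_vec <- colMeans(emb[tokens, , drop = FALSE])  # average over tokens
doc_vec
```

The averaged vector can then feed any downstream model, or be compared to other URLs with a cosine or word-mover distance.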
27,448 | URL Feature representations | URL's have the following format:
https://www.example.com:port/path/to/the/myfile.html?key=value#Anchor
It tells you several things:
Using https vs http tells you if the site is encrypted; this may or may not be important information. Notice that you can have both https and http URLs pointing to the same website, so the fact that http exists does not mean that the website does not offer encryption.
You would almost never see ports, so if you see one you can use a binary flag (yes/no) for it.
In the domain name, the example above uses the .com top-level domain suffix. Those tell you something about the origin of the website, for example, .de is German country code, .edu is for educational purposes, there are suffixes as .mil or .gov for official, government pages, etc. Also keep in mind that there may be regional variations, e.g. .co.uk, or .gov.pl, where both parts of the suffix give you useful information. Notice however that to a great degree those may be misleading, for example, the .ai suffix in most cases would be used by a Silicon Valley AI startup, rather than someone coming from Anguilla.
The domain itself can give you some clues about the content: for example, if you saw wikipedia, amazon, or instagram there, you would instantly be able to make a guess about the content. You could probably encode the most popular domains and keep the rare ones (johnnys-funny-cat-pics-blog.com) as the "other" category.
Notice that the length of the domain is meaningful as well: the short ones are usually already taken, so they are either old, or someone bought them for a larger amount of money.
The remaining /path/to/the/myfile.html can tell you something about the content of the page (e.g. login.php means that you can probably log in to something and penguins.html can be about penguins). In many cases, the words you see here would to some extent describe the content of the page. Treat them the same as any other words in other natural language processing tasks (bag-of-words, embeddings).
The file extensions like .html, .php, .asp, etc. can tell you something about the technology used.
The ?key=value parameters would be used only in the pages that are more complicated than a static HTML-only page. They exist in pages that can receive parametrized GET requests. Their existence (yes/no) may be useful information in some cases. Their content (key=value pairs) can tell you what kind of information is sent or received between the client and the server.
The #Anchor links to a section of a webpage. Those would be used in places like Wikipedia, blogs, documentation, etc, and you are unlikely to see them in non-article pages (e.g. an online store). Whether a URL has an anchor (yes/no) may be useful information, and the anchor itself may tell you something about the content (e.g. if you saw #Population, this may be an article describing some country). If you have them in your data, consider them as well.
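As a sketch, the features above can be pulled out of a URL with base-R regular expressions. The example URL and the exact feature set are illustrative, not prescriptive:

```r
url <- "https://en.wikipedia.org/wiki/Anguilla?action=view#Population"

# Binary / categorical features discussed above
https      <- grepl("^https://", url)                      # encrypted?
has_port   <- grepl("^[a-z]+://[^/]+:[0-9]+", url)         # explicit port?
domain     <- sub("^[a-z]+://([^/:?#]+).*$", "\\1", url)   # "en.wikipedia.org"
tld        <- sub(".*\\.", "", domain)                     # "org"
domain_len <- nchar(domain)
has_params <- grepl("\\?", url)
has_anchor <- grepl("#", url)

# Path words, to be treated like any other text (bag-of-words, embeddings)
path   <- sub("^[a-z]+://[^/]+", "", sub("[?#].*$", "", url))
tokens <- setdiff(strsplit(path, "/")[[1]], "")            # "wiki" "Anguilla"
```

For real use, a dedicated URL parser would be more robust than regexes, but this shows how directly the features map onto the URL components.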
27,449 | Proving Ridge Regression is strictly convex | "you can prove a function is strictly convex if the 2nd derivative is strictly greater than 0"
That's in one dimension. A multivariate twice-differentiable function is convex iff the 2nd derivative matrix is positive semi-definite, because that corresponds to the second directional derivative in any direction being non-negative. It's strictly convex if the second derivative matrix is positive definite.
As you showed, the ridge loss function has second derivative $2\lambda I +2X^TX$, which is positive definite for any $\lambda>0$ because
$\lambda I$ is positive definite for any $\lambda>0$
$X^TX$ is positive semi-definite for any $X$
the sum of a positive definite and positive semi-definite matrix is positive definite
If you aren't sure about any of these and want to check in more detail, it's useful to know that $A$ is positive definite iff $b^TAb>0$ for all (non-zero) column vectors $b$. Because of this relationship, many matrix proofs of positive definiteness just come from writing the scalar proofs of positiveness in matrix notation (including non-trivial results like the Cramér-Rao lower bound for variances).
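A quick numerical illustration of the argument (random data, arbitrary $\lambda$): even when $X^TX$ is rank-deficient, the smallest eigenvalue of the ridge Hessian stays bounded away from zero.

```r
set.seed(1)
n <- 5; p <- 10; lambda <- 0.1
X <- matrix(rnorm(n * p), n, p)          # p > n, so t(X) %*% X is only PSD
H <- 2 * lambda * diag(p) + 2 * t(X) %*% X
min(eigen(H, symmetric = TRUE)$values)   # >= 2 * lambda > 0: positive definite
```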
27,450 | Proving Ridge Regression is strictly convex | Less of a proof, and more of a convincing argument (that can lead you towards the proof): we all agree ordinary least squares with full rank covariance matrix $X^TX$ is strictly convex (see Convexity of linear regression), ridge regression is a form of OLS with augmented (virtual) data, thus it's also strictly convex.
The augmentation $X_{\text{aug}} = \left[ \begin{matrix}X^T & \sqrt\lambda\,\mathbb I \end{matrix}\right]^T$ actually ensures that, in ridge, $X_{\text{aug}}^T X_{\text{aug}}$ is full rank, since it consists of concatenating a multiple of the identity matrix, $\sqrt\lambda\,\mathbb I$.
So, if you can show that the equivalent OLS is strictly convex, so is ridge regression.
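A small numeric sketch of this equivalence (the data here are arbitrary): OLS on the augmented design reproduces the ridge solution exactly.

```r
set.seed(1)
n <- 20; p <- 3; lambda <- 2
X <- matrix(rnorm(n * p), n, p); y <- rnorm(n)

# Ridge solution from the normal equations
beta_ridge <- solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)

X_aug <- rbind(X, sqrt(lambda) * diag(p))   # append sqrt(lambda) * I as rows
y_aug <- c(y, rep(0, p))                    # with zero pseudo-responses
beta_aug <- solve(t(X_aug) %*% X_aug, t(X_aug) %*% y_aug)

all.equal(beta_ridge, beta_aug)             # TRUE
```

This works because $t(X_{\text{aug}}) X_{\text{aug}} = X^TX + \lambda I$ and the zero pseudo-responses leave $X^Ty$ unchanged.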
27,451 | Will log transformation always mitigate heteroskedasticity? | No; sometimes it will make it worse.
Heteroskedasticity where the spread is close to proportional to the conditional mean will tend to be improved by taking log(y), but if it's not increasing with the mean at close to that rate (or more), then the heteroskedasticity will often be made worse by that transformation.
Because taking logs "pulls in" more extreme values on the right (high values), while values at the far left (low values) tend to get stretched back, spreads will become smaller if the values are large but may become stretched if the values are already small.
If you know the approximate form of the heteroskedasticity, then you can sometimes work out a transformation that will approximately make the variance constant. This is known as a variance-stabilizing transformation; it is a standard topic in mathematical statistics. There are a number of posts on our site that relate to variance-stabilizing transformations.
If the spread is proportional to the square root of the mean (variance proportional to the mean), then a square root transformation - the variance-stabilizing transformation for that case - will tend to do much better than a log transformation; the log transformation does "too much" in that case. If instead the spread decreases as the mean increases, then taking either logs or square roots would make it worse. (It turns out that the 1.5 power actually does reasonably well at stabilizing variance in that case.)
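A simulated illustration of both effects, using Poisson-style data where the conditional variance equals the conditional mean, so the square root is the variance-stabilizing transformation:

```r
set.seed(1)
x <- runif(2000, 1, 10)
y <- rpois(2000, lambda = 20 * x)   # conditional variance = conditional mean

# Standard deviation of the response within 5 bins of x
spread <- function(z) tapply(z, cut(x, 5), sd)
spread(y)         # spread grows with x
spread(sqrt(y))   # roughly constant: variance stabilized
spread(log(y))    # spread now shrinks with x: the log did "too much"
```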
27,452 | Will log transformation always mitigate heteroskedasticity? | From my experience, when the data is 'cone-shaped' and skewed (lognormally or otherwise) the log-transformation is most helpful (see below). This sort of data often arises from populations of people, e.g. users of a system, where there will be a large population of casual infrequent users and a small tail of frequent users.
Here's an example of some cone-shaped data:
x1 <- rlnorm(500,mean=2,sd=1.3)
x2 <- rlnorm(500,mean=2,sd=1.3)
y <- 2*x1+x2
z <- 2*x2+x1
#regression of unlogged values
fit <- lm(z ~ y)
plot(y,z,main=paste("R squared =",summary.lm(fit)[8]))
abline(coefficients(fit),col=2)
Taking the logs of both y and z gives:
#regression of logged values
fit <- lm(log(z) ~ log(y))
plot(log(y),log(z),main=paste("R squared =",summary.lm(fit)[8]))
abline(coefficients(fit),col=2)
Keep in mind that doing regression on logged data will change the form of the equation of the fit from $y=ax+b$ to $\log(y) = a\log(x)+b$ (or alternatively $y=x^a e^b$).
Beyond this scenario, I would say it never hurts to try graphing the logged data, even if it doesn't make the residuals more homoscedastic. It often reveals details you wouldn't otherwise see or spreads out/squashes data in a useful way.
27,453 | Is it possible to decompose fitted residuals into bias and variance, after fitting a linear model? | You generally can't decompose error (residuals) into bias and variance components. The simple reason is that you generally don't know the true function. Recall that $bias(\hat f(x)) = E[\hat f(x) - f(x)],$ and that $f(x)$ is the unknown thing you wish to estimate.
What about bootstrapping?
It is possible to estimate the bias of an estimator by bootstrapping, but it's not about bagging models, and I don't believe there is a way to use the bootstrap to assess the bias in $\hat f(x),$ because bootstrapping is still based on some notion of the Truth and can't, in spite of the origins of its name, create something from nothing.
To clarify: the bootstrap estimate of bias in the estimator $\hat \theta$ is
$$\widehat{bias}_B = \hat\theta^*(\cdot) - \hat \theta,$$
with $\hat\theta^*(\cdot)$ being the average of your statistic computed on $B$ bootstrap samples. This process emulates that of sampling from some population and computing your quantity of interest. This only works if $\hat\theta$ could in principle be computed directly from the population. The bootstrap estimate of bias assesses whether the plug-in estimate, i.e. just making the same computation on a sample instead of in the population, is biased.
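For concreteness, here is that bootstrap bias estimate applied to a plug-in estimator that is known to be biased: the maximum-likelihood variance, which divides by $n$ rather than $n-1$. The data are simulated purely for illustration:

```r
set.seed(1)
x <- rnorm(30)
theta_hat <- mean((x - mean(x))^2)        # plug-in (divide-by-n) variance

B <- 2000
theta_star <- replicate(B, {
  xb <- sample(x, replace = TRUE)         # resample as if from the population
  mean((xb - mean(xb))^2)
})
bias_hat <- mean(theta_star) - theta_hat  # close to -theta_hat / n
```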
If you just want to use your residuals to evaluate model fit, that is entirely possible. If you, as you say in the comments, want to compare the nested models $f_1(x) = 3x_1 + 2x_2$ and $f_2(x) = 3x_1 + 2x_2 + x_1x_2$, you can do ANOVA to check whether the larger model significantly reduces sum of squared error.
27,454 | Is it possible to decompose fitted residuals into bias and variance, after fitting a linear model? | One situation where you can get an estimate of the decomposition is if you have replicated points (i.e. to have more than one response for various combinations of the predictors).
This is mostly limited to situations where you have control of the independent variables (such as in experiments) or where they're all discrete (when there are not too many x-combinations and you can take a large enough sample that x-value combinations get multiple points).
The replicated points give you a model-free way of estimating the conditional mean. In such situations there's the possibility of decomposition of the residual sum of squares into pure error and lack of fit, but you also have direct (though necessarily noisy) estimates of the bias at each combination of x-values for which you have multiple responses.
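In R, this decomposition amounts to comparing the candidate model against a saturated model with one mean per x-value. The data below are simulated so that the candidate (linear) model is deliberately biased:

```r
set.seed(1)
x <- rep(1:5, each = 10)                # 10 replicates at each x value
y <- 2 + 0.5 * x^2 + rnorm(length(x))   # the true mean function is quadratic

fit_lin <- lm(y ~ x)                    # candidate model
fit_sat <- lm(y ~ factor(x))            # model-free conditional means
anova(fit_lin, fit_sat)                 # significant result => lack of fit (bias)
```

The residual sum of squares of fit_sat is the pure error; the difference between the two models' residual sums of squares is the lack-of-fit component.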
27,455 | Is it possible to decompose fitted residuals into bias and variance, after fitting a linear model? | In the somewhat more complex Kalman filtering realm, sometimes people test the residuals (observed measurements minus predicted measurements) to look for model changes or fault conditions. In theory, if the model is perfect, and the noise is Gaussian, then the residuals should also be Gaussian with zero mean and also be consistent with a predicted covariance matrix. People can test for nonzero mean with sequential tests like a Sequential Probability Ratio Test (SPRT). Your situation is different because you have a fixed batch of data rather than a steady stream of new data. But the basic idea of looking at the sample distribution of the residuals might still apply.
You indicate that the process you are modeling might change occasionally. Then, to do more with the data you have, you'd probably need to identify other factors causing that change. Consider 2 possibilities: (1) maybe you need local models rather than one global model, e.g., because there are severe nonlinearities only in some operating regions, or (2), maybe the process changes over time.
If this is a physical system, and your samples aren't taken at huge time intervals apart, it's possible that these process changes persist over significant time periods. That is, true model parameters may occasionally change, persisting for some time period. If your data is time stamped, you might look at residuals over time. For instance, suppose you have fit y = Ax + b using all your data, finding A and b. Then go back and test the residual sequence r[k] = y[k] - Ax[k] - b, where k is an index corresponding to times in sequential order. Look for patterns over time, e.g., periods where summary statistics like ||r[k]|| stay higher than normal for some time. Sequential tests would be the most sensitive to detecting sustained bias sorts of errors, something like SPRT or even CUSUM for individual vector indices. This could point to time periods where you need to consider more complex models.
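As a minimal sketch of the idea, here is a CUSUM-style scan of simulated, time-ordered residuals with an artificial sustained bias introduced partway through:

```r
set.seed(1)
r <- c(rnorm(100), rnorm(50, mean = 1))   # sustained bias appears after t = 100
cusum <- cumsum(r - mean(r))              # cumulative sum of deviations
plot(cusum, type = "l")                   # drifts sharply after the change
which.min(cusum)                          # extremum lands near the change point
```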
27,456 | Is it possible to decompose fitted residuals into bias and variance, after fitting a linear model? | The answer is no, because bias and variance are attributes of model parameters, rather than the data used to estimate them. There is a partial exception to that statement that pertains to bias and variance varying (ha!) through the predictor space; more on that below. Note that this has absolutely nothing to do with knowing some "true" function relating the predictors and response variables.
Consider the estimate of $\beta$ in a linear regression, $\hat\beta=(X^TX)^{-1}X^TY$, where $X$ is an $N\times P$ matrix of predictors, $\hat\beta$ is a $P\times 1$ vector of parameter estimates, and $Y$ is an $N\times 1$ vector of responses. Let's assume for argument's sake that we have an infinite population of data from which to draw (this is not completely ridiculous, by the way -- if we were actively recording data from some physical process we could record predictor and response data at a rapid rate, thus practically satisfying this assumption). So we draw $N$ observations, each consisting of a single response value and a value for each of the $P$ predictors. We then compute our estimate of $\hat\beta$ and record the values. Let us then take this entire process and repeat it $N_{iter}$ times, each time making $N$ independent draws from the population. We will accumulate $N_{iter}$ estimates of $\hat\beta$ over which we can compute the variance of each element in the parameter vector. Note that the variance of these parameter estimates is inversely proportional to $N$ and proportional to $P$, assuming orthogonality of the predictors.
The bias of each parameter can be estimated similarly. While we may not have access to the "true" function, let's suppose we can make an arbitrarily large number of draws from the population in order to compute $\hat\beta_{best}$, which will serve as a proxy for the "true" parameter value. We'll assume that this is an unbiased estimate (ordinary least squares) and that the number of observations used was sufficiently large such that the variance of this estimate is negligible. For each of the $P$ parameters, we compute $\hat\beta_{best_j}-\hat\beta_j$, where $j$ ranges from $1$ to $N_{iter}$. We take the average of these differences as an estimate of the bias in the corresponding parameter.
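The repeated-sampling scheme described above can be sketched directly, here drawing from a simulated "infinite population" rather than a real one (the true coefficients and sample sizes are arbitrary):

```r
set.seed(1)
N <- 50; N_iter <- 1000
beta_true <- c(2, -1)

draws <- replicate(N_iter, {
  X <- cbind(rnorm(N), rnorm(N))   # a fresh sample of N observations
  y <- X %*% beta_true + rnorm(N)
  coef(lm(y ~ X - 1))              # no intercept, for simplicity
})

apply(draws, 1, var)               # sampling variance of each estimate
rowMeans(draws) - beta_true        # estimated bias (about zero for OLS)
```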
There are corresponding ways of relating bias and variance to the data itself, but they're a little more complicated. As you can see, bias and variance can be estimated for linear models, but you will require quite a bit of hold-out data. A more insidious problem is the fact that once you start working with a fixed dataset, your analyses will be polluted by your personal variance, in that you'll have already begun wandering through the garden of forking paths and there's no way of knowing how that would replicate out-of-sample (unless you just came up with a single model and ran this analysis and committed to leaving it alone after that).
Regarding the matter of the data points themselves, the most correct (and trivial) answer is that if there is any difference between $Y$ and $\hat{Y}$, you need a more complex model (assuming that you could correctly identify all the relevant predictors; you can't). Without going into a boring treatise on the philosophical nature of "error," the bottom line is that there was something going on that caused your model to miss its mark. The problem is that adding complexity increases variance, which will likely cause it to miss the mark on other data points. Therefore, worrying about error attribution at the individual data point level is not likely to be a fruitful endeavor. The exception (mentioned in the first paragraph) stems from the fact that bias and variance are actually functions of the predictors themselves, so you may have large bias in one part of the predictor space and smaller bias in another (same for variance). You could assess this by computing $Y-\hat{Y}$ many times (where $\hat{Y}=X\hat\beta$ and $\hat\beta$ was not estimated based on $Y$) and plotting its bias (average) and variance as a function of the values of $X$. However, I think that's a pretty specialized concern.
The answer is no, because bias and variance are attributes of model parameters, rather than the data used to estimate them. There is a partial exception to that statement that pertains to bias and variance varying (ha!) through the predictor space; more on that below. Note that this has absolutely nothing to do with knowing some "true" function relating the predictors and response variables.
Consider the estimate of $\beta$ in a linear regression, $\hat\beta=(X^TX)^{-1}X^TY$, where $X$ is an $N\times P$ matrix of predictors, $\hat\beta$ is a $P\times 1$ vector of parameter estimates, and $Y$ is an $N\times 1$ vector of responses. Let's assume for argument's sake that we have an infinite population of data from which to draw (this is not completely ridiculous, by the way -- if we were actively recording data from some physical process we could record predictor and response data at a rapid rate, thus practically satisfying this assumption). So we draw $N$ observations, each consisting of a single response value and a value for each of the $P$ predictors. We then compute our estimate of $\hat\beta$ and record the values. Let us then take this entire process and repeat it $N_{iter}$ times, each time making $N$ independent draws from the population. We will accumulate $N_{iter}$ estimates of $\hat\beta$ over which we can compute the variance of each element in the parameter vector. Note that the variance of these parameter estimates is inversely proportional to $N$ and proportional to $P$, assuming orthogonality of the predictors.
The bias of each parameter can be estimated similarly. While we may not have access to the "true" function, let's suppose we can make an arbitrarily large number of draws from the population in order to compute $\hat\beta_{best}$, which will serve as a proxy for the "true" parameter value. We'll assume that this is an unbiased estimate (ordinary least squares) and that the number of observations used was sufficiently large such that the variance of this estimate is negligible. For each of the $P$ parameters, we compute $\hat\beta_{best_j}-\hat\beta_j$, where $j$ ranges from $1$ to $N_{iter}$. We take the average of these differences as an estimate of the bias in the corresponding parameter.
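The resampling scheme just described is easy to sketch in code. Here is a minimal pure-Python Monte Carlo version for a single-predictor regression; the population $y = 2 + 3x + \varepsilon$, the seed, and the sample sizes are illustrative assumptions, not anything from the original answer:

```python
import random
import statistics

def fit_slope(xs, ys):
    # Ordinary least squares slope for a single predictor with intercept.
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sxy / sxx

def slope_estimates(n, n_iter, rng):
    # Draw n observations from the "infinite population" y = 2 + 3x + noise,
    # fit, and record the slope estimate; repeat n_iter times.
    out = []
    for _ in range(n_iter):
        xs = [rng.uniform(-1, 1) for _ in range(n)]
        ys = [2 + 3 * x + rng.gauss(0, 1) for x in xs]
        out.append(fit_slope(xs, ys))
    return out

rng = random.Random(0)
small = slope_estimates(20, 2000, rng)
large = slope_estimates(200, 2000, rng)

bias_small = statistics.mean(small) - 3   # near zero: OLS is unbiased
var_small = statistics.variance(small)
var_large = statistics.variance(large)    # shrinks as n grows
print(bias_small, var_small, var_large)
```

The variance of the slope estimates shrinks as $N$ grows, while the average estimate stays near the true value, i.e. the estimated bias is near zero.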
27,457 | Zero-sum property of the difference between the data and the mean | You already got more formal answers. This answer ought to give you some "intuition" behind the math.
The arithmetic mean is sensitive to your data (including outliers). Imagine a lever, like the one illustrated below. Your data are the orange balls that lie on a beam (imagine that it is the x-axis of some kind of plot and your data are values scattered at various positions along it). For the rod to stay horizontal, the hinge needs to be placed in a position that balances the balls. You may recall from elementary physics (or just playground experience from your childhood) that the placement of the balls determines how much they influence the lever. The "outlying" balls, as we call them in statistics, have much greater influence than the balls clustered around the "center". The mean is the value that places the hinge in the exact position that makes the lever balance.
So we can say that the mean lies in the center, between the values. The center is defined in terms of distances (i.e. differences) between the points and the mean. Since it is in the center, we would expect the distances to be balanced, i.e. to zero each other out, so the sum of the signed distances needs to be zero, and the mean has this property (and only the mean).
Check also the related Arithmetic mean. Why does it work? thread on math.stackexchange.com.
27,458 | Zero-sum property of the difference between the data and the mean | Let $y_1,y_2, \dots, y_n$ be $n$ observational values of a variable $Y$ and let $\overline{y} := \frac{1}{n}\sum_{i=1}^n y_i$ denote the arithmetic mean of the observations. The zero-sum property can be written mathematically as:
$$0 = \sum_{i=1}^n (y_i - \overline{y}).$$
Proof: By definition of $\overline{y}$ we have $n\overline{y} = n\frac{1}{n}\sum_{i=1}^n y_i = \sum_{i=1}^n y_i$ and hence:
$$\sum_{i=1}^n (y_i - \overline{y}) = \sum_{i=1}^n y_i - n \overline{y} =n \overline{y} - n \overline{y}= 0.$$
Interpretation: Note that $(y_i - \overline{y})$ is essentially the "distance" between the observation $y_i$ and the arithmetic mean $\overline{y}$, where the information whether the observation is smaller or greater than the arithmetic mean is still preserved through the sign of $(y_i - \overline{y})$ (of course, the distance itself would have to be nonnegative and would be $|y_i-\overline{y}|$).
The zero-sum property can then be interpreted as saying that the arithmetic mean is the number $\overline{y}$ for which the observational values of $Y$ smaller than $\overline{y}$ and those larger than $\overline{y}$ keep in balance, i.e. their signed deviations sum up to zero.
In fact it is easy to see from the proof that it is the only number for which this property holds.
You could obviously use this property to check whether the calculation of the mean was correct.
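A quick sketch of that check in Python (the data values are made up): the sum of signed deviations is zero only for the true mean, so a clearly nonzero sum flags a miscalculated mean.

```python
data = [3.2, 4.7, 1.1, 9.0, 5.5]

def deviation_sum(values, m):
    # Sum of signed deviations of the values from a candidate mean m.
    return sum(v - m for v in values)

correct_mean = sum(data) / len(data)      # about 4.7
print(deviation_sum(data, correct_mean))  # essentially 0 (only roundoff remains)
print(deviation_sum(data, 4.0))           # about 3.5: not zero, so 4.0 is not the mean
```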
27,459 | Zero-sum property of the difference between the data and the mean | Verba docent, exempla trahunt. ("Words teach, examples lead.")
Seneca
Take three numbers: 1, 2 and 3.
Mean value is 2
Differences between the values and the mean are:
1-2 = -1
2-2 = 0
3-2 = 1
Sum of these differences is
-1 + 0 + 1 = 0
The zero-sum property states that no matter what numbers you start with, the result (the sum of the differences between them and their mean) will be 0.
27,460 | Zero-sum property of the difference between the data and the mean | Here's a simple handy little general proof of the result $\sum (x_i - \overline{x}) = 0$
Let's take the sequence of numbers:
$$x_1,x_2,x_3,...,x_n$$
we note that the mean of this set of numbers can be denoted by,
$$\overline{x}=\frac{\sum x_i}{n}$$
Going back to the LHS of the original statement $\sum (x_i - \overline{x})$ we can write this out in full as follows:
$$\sum (x_i - \overline{x}) = \Bigl(x_1-\frac{\sum x_i}{n}\Bigr) + \Bigl(x_2-\frac{\sum x_i}{n}\Bigr) + \Bigl(x_3-\frac{\sum x_i}{n}\Bigr) +...+\Bigl(x_n-\frac{\sum x_i}{n}\Bigr)$$
This can be simplified down to 0 in the following steps:
$$x_1+x_2+x_3+...+x_n-\Bigl(\frac{n\sum x_i}{n}\Bigr)$$
$$\sum x_i-\sum x_i$$
$$=0$$
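The result holds exactly, not just up to floating-point roundoff. A small sketch with Python's exact rational arithmetic (the random inputs are arbitrary) confirms it:

```python
from fractions import Fraction
import random

rng = random.Random(42)
# Fifty arbitrary rationals; Fraction keeps every operation exact.
xs = [Fraction(rng.randint(-100, 100), rng.randint(1, 100)) for _ in range(50)]
xbar = sum(xs) / len(xs)
print(sum(x - xbar for x in xs) == 0)  # True: the sum of deviations is exactly zero
```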
27,461 | Understanding this PCA plot of ice cream sales vs temperature | I know that PCA objective is to reduce dimensionality
This is often what people assume, but in fact PCA is just a representation of your data onto an orthogonal basis. This basis still has the same dimensionality as your original data. Nothing is lost...yet. The dimensionality reduction part is completely up to you. What PCA ensures is that the top $k$ dimensions of your new projection are the best $k$ dimensions that your data could possibly be represented as. What does best mean? That's where the variance explained comes in.
obviously not in this case
I wouldn't be so sure about that! From your second plot, visually it looks like a lot of the information from your data can be projected onto a horizontal line. That's 1 dimension, instead of the original plot which was in 2 dimensions! Obviously you lose some information because you're removing the Y-axis, but whether this information loss is acceptable to you, is your call.
There are a ton of questions related to what PCA is on the site so I encourage you to check them out here, here, here or here. If you have other questions after that, please post them and I'd be happy to help.
As your actual question:
what is the story you can tell about temperature vs the ice cream in the PCA plot?
Since the new coordinate axes are linear combinations of the original coordinates, then...basically nothing! PCA will give you an answer like (numbers made up):
$$\begin{split}
\mathrm{PC1} &= 2.5\times \text{ice cream} - 3.6\times \text{temperature}\\
\mathrm{PC2} &= -1.5\times \text{ice cream} + 0.6\times \text{temperature}
\end{split}$$
Is that useful to you? Maybe. But I'd guess not :)
Edited
I'll add this resource which I think is helpful because interactive charts are cool.
Edited again
To clarify what best $k$ means:
PCA tries to find the dimensions that yield the highest variance when the data is projected onto them. Assuming your data has $n > k$ dimensions, the first $k$ PCs explain more variance in your data than any other $k$ dimensions can. That's what I mean by best $k$. Whether or not that's useful to you is another thing.
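A small pure-Python sketch of this idea on made-up "temperature vs ice-cream" data (the data-generating process and all numbers here are assumptions for illustration; a 2x2 covariance matrix is small enough to eigendecompose in closed form):

```python
import math
import random

rng = random.Random(1)
# Made-up "temperature vs ice-cream sales" data with a strong linear relation.
temp = [rng.uniform(0, 30) for _ in range(500)]
sales = [2.0 * t + rng.gauss(0, 5) for t in temp]

def cov(a, b):
    # Sample covariance of two equal-length lists.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

# Eigenvalues of the 2x2 covariance matrix, in closed form.
sxx, syy, sxy = cov(temp, temp), cov(sales, sales), cov(temp, sales)
tr, det = sxx + syy, sxx * syy - sxy ** 2
root = math.sqrt(tr * tr / 4 - det)
lam1, lam2 = tr / 2 + root, tr / 2 - root   # lam1 >= lam2 >= 0

explained = lam1 / (lam1 + lam2)   # share of variance along PC1
pc1 = (sxy, lam1 - sxx)            # unnormalized PC1 direction
print(explained, pc1)
```

Here PC1 carries nearly all of the variance, so projecting onto that single dimension loses little information, and its direction is a linear combination of the two original variables.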
27,462 | Understanding this PCA plot of ice cream sales vs temperature | To the good answer of Ilan man I would add that there is a quite straightforward interpretation of your principal components, although in this simple 2D case it doesn't add much to what we could have interpreted just looking at the scatterplot.
The first PC is a weighted sum (that is, a linear combination where both coefficients are positive) of temperature and ice-cream consumption. On the right side you have hot days where a lot of ice-cream is sold, and on the left side you have colder days where less ice-cream is sold. That PC explains most of your variance, and the groups you got match those two sides.
The second PC measures how temperature and ice-cream consumption move away from the close linear relation highlighted by the first PC. In the upper part of the graph we have days with more ice-cream sold compared with other days of the same temperature, and in the lower part days with less ice-cream sold than expected according to temperature. That PC explains just a small part of the variance.
That is, we can tell a story from principal components, although with just two variables it's the same story we could have noticed without PCA. With more variables PCA becomes more useful because it tells stories that would be harder to notice otherwise.
27,463 | Skewness Kurtosis Plot for different distribution | Pearson plot
Let:
$$\beta _1=\frac{\mu _3^2}{\mu _2^3} \quad \text{and} \quad \beta _2=\frac{\mu _4}{\mu _2^2}$$
where $\sqrt{\beta_1}$ is often used as a measure of skewness, and $\beta_2$ is often used as a measure of kurtosis.
The Pearson plot diagram characterises distributions that are members of the Pearson family in terms of skewness ($x$-axis) and kurtosis ($y$-axis):
The point at (0,3) denotes the Normal distribution.
A Gamma distribution defines the green Type III line. The Exponential distribution is just a point on the Type III line.
An Inverse Gamma distribution defines the blue Type V line.
A Power Function [Beta(a,1)] defines the Type I(J) line. Type I nests the Beta distribution.
Type VII (symmetrical: when $\beta_1 = 0$) nests Student's t distribution etc
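As a numerical check of the Type III (gamma) line, here is a short sketch using the standard Gamma moment results (skewness $2/\sqrt{\alpha}$ and excess kurtosis $6/\alpha$ for shape $\alpha$); the code and function name are illustrative, not from the text:

```python
def gamma_betas(alpha):
    # beta1 = squared skewness, beta2 = kurtosis for Gamma(shape=alpha).
    beta1 = (2 / alpha ** 0.5) ** 2   # = 4/alpha
    beta2 = 3 + 6 / alpha
    return beta1, beta2

# Every shape parameter falls on the straight line beta2 = 3 + 1.5*beta1,
# which is the Type III line; the Exponential (alpha = 1) is the point (4, 9).
for alpha in (0.5, 1, 2, 8):
    b1, b2 = gamma_betas(alpha)
    assert abs(b2 - (3 + 1.5 * b1)) < 1e-12
print(gamma_betas(1))  # (4.0, 9.0)
```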
Adding other distributions to the Pearson diagram
The OP asks how to add other distributions (that may not even be members of the Pearson family) onto the Pearson plot diagram. This is something done as an exercise in Chapter 5 of our Springer text: Rose and Smith, Mathematical Statistics with Mathematica. A free copy of the chapter can be downloaded here:
http://www.mathstatica.com/book/bookcontents.html
To illustrate, suppose we want to add a skew-Normal distribution onto the Pearson plot diagram. Let $X \sim \text{skewNormal}(\lambda)$ with pdf $f(x) = 2\,\phi(x)\,\Phi(\lambda x)$, where $\phi$ and $\Phi$ denote the standard Normal pdf and cdf. Writing $\delta = \lambda/\sqrt{1+\lambda^2}$:
The mean is $\delta\sqrt{2/\pi}$ ...
... while the second, third and fourth central moments are $\mu_2 = 1-\frac{2\delta^2}{\pi}$, $\mu_3 = \frac{4-\pi}{2}\bigl(\delta\sqrt{2/\pi}\bigr)^{3}$ and $\mu_4 = 2(\pi-3)\bigl(\delta\sqrt{2/\pi}\bigr)^{4}+3\mu_2^2$.
Then, $\beta_1$ and $\beta_2$ are given by $\beta_1 = \mu_3^2/\mu_2^3$ and $\beta_2 = \mu_4/\mu_2^2$.
Since $\beta_1$ and $\beta_2$ are both determined by parameter $\lambda$, it follows that $\beta_1$ and $\beta_2$ are related. Eliminating parameter $\lambda$ looks tricky, so we shall use numerical methods instead. For instance, we can plot $\beta_1$ and $\beta_2$ parametrically, as a function of $\lambda$, as $\lambda$ increases from 0 to say 300:
Where will this line be located on a Pearson diagram? To see the answer exactly, we need to superimpose plot P1 onto a standard Pearson diagram. Since a Pearson plot has its vertical axis inverted, we still need to invert the vertical axis of plot P1. This can be done by converting all points {x,y} in P1 into {x,9-y}, and then showing P1 and the Pearson plot together. Doing so yields:
In summary, the black line in the diagram depicts the possible values of ($\beta_1, \beta_2$) that a $\text{skew-Normal}(\lambda)$ distribution can exhibit. We start out at (0,3) i.e. the Normal distribution when $\lambda=0$, and then move along the black line as $\lambda$ increases towards infinity.
The same method can conceptually be adopted for any distribution: in some cases, the distribution will be captured by a single point; in others, such as the case here, as a line; and in others, something more general.
Notes
Expect and PearsonPlot are functions from the mathStatica add-on to Mathematica; Erf denotes the error function, and ParametricPlot is a Mathematica function.
In the limit, as $\lambda$ increases towards infinity, at the upper extremum, $\beta_1$ and $\beta_2$ take the following values, here expressed numerically: {0.990566, 3.86918}
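As a cross-check on those limiting values, the standard skew-Normal moment formulas can be evaluated directly. This Python sketch is illustrative and is not the mathStatica code used in the answer:

```python
import math

def skew_normal_betas(lam):
    # beta1 and beta2 of skewNormal(lam) via delta = lam/sqrt(1 + lam^2).
    d = lam / math.sqrt(1 + lam * lam)
    mz = d * math.sqrt(2 / math.pi)                  # mean of the standardized variate
    m2 = 1 - mz * mz                                 # variance
    m3 = (4 - math.pi) / 2 * mz ** 3                 # third central moment
    m4 = 2 * (math.pi - 3) * mz ** 4 + 3 * m2 * m2   # fourth central moment
    return m3 * m3 / m2 ** 3, m4 / m2 ** 2

print(skew_normal_betas(0))        # (0.0, 3.0): the Normal point
b1, b2 = skew_normal_betas(1e8)    # effectively the lambda -> infinity limit
print(b1, b2)                      # approximately 0.990566 and 3.86918
```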
27,464 | Skewness Kurtosis Plot for different distribution | There are two main types of skewness-kurtosis plot; one where the skewness is plotted against kurtosis (where the boundary of impossibility is a parabola) and one where squared skewness is plotted against kurtosis (where it becomes a line).
Some plots don't just plot the sample value for a single sample -- some use the bootstrap to produce a spread of values that's supposed to reflect sampling variability (so you can see which distributions are plausible and which ones are "too far away"). [However, by trying some example population distributions, it seems often to underestimate the variability in one direction and slightly overestimate it in another (the resampled values often tended to cluster about a line more than they should if they really indicated the sampling variability).]
But in any case the way to plot a distribution is to look up its skewness and kurtosis and plot them.
If the skewness and kurtosis are fixed, just plot that point (and label it). However, note that some distributions may not have both skewness and kurtosis being finite (if kurtosis is finite then skewness must be too, and if skewness is not finite then kurtosis won't be either).
If the distribution is a family whose skewness and kurtosis depends on a parameter, if you can't write the relationship between skewness and kurtosis directly you can use that parameter to parameterize the curve it lays on.
If skewness and/or kurtosis depends on a couple of parameters between them (or perhaps more that two), you probably have a region rather than a curve, in which case you'll need to try to figure out the boundaries from the relationship between the possible values of the parameters and the skewness/kurtosis.
Here's a basic diagram (based on one I did for some notes I wrote last year on the Pearson family):
Note that I have $\beta_2$ (kurtosis) on the x-axis and $\beta_1$ (squared skewness) on the y-axis. This is because the plots work better in this orientation as you need more space for kurtosis.
The Pearson distribution types are marked with Roman numerals in parentheses. Note that the named distributions (like the "gamma" on the "(III)" line) encompass scaled and shifted versions of those families (including negative scales).
[My diagram doesn't split up the beta (type $\text{I}$) region into "U"-shaped/"J"-shaped/hill-shaped subregions but some diagrams do so]
Here are some examples of adding some distributions which are not in the Pearson family:
case 1 (single distribution before location/scale transform): Logistic distribution
Wikipedia gives the skewness as 0 and the excess kurtosis as 1.2, so $\beta_1=0$ and $\beta_2=4.2$ (the kurtosis proper, i.e. excess kurtosis plus 3). So we want to plot a point (marked with, say, "L") at (4.2, 0).
case 2 (1-parameter family before location-scale): Lognormal distribution
Wikipedia gives skewness as $(e^{\sigma^2}\!\!+2) \sqrt{e^{\sigma^2}\!\!-1}$
and excess kurtosis as $e^{4\sigma^2}\!\! + 2e^{3\sigma^2}\!\! + 3e^{2\sigma^2}\!\! - 6$
Writing $a$ for $e^{\sigma^2}$, this means that
$\beta_1=(a+2)^2(a-1)$
$\beta_2=a^4\!\! + 2a^3\!\! + 3a^2\!\! - 3$
The direct relationship between these two is not immediately obvious so let's
use $a$ to parameterize it. As $a\to 1$, note that this approaches the normal.
At $a=1.724$ we reach the right edge of our diagram so we should have $a$ vary
between those values. If we choose enough values (10 is probably enough since
it isn't strongly curved) we get a smooth looking curve.
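As a quick numerical check of this parameterization (a sketch in Python; the function name is mine):

```python
def lognormal_betas(a):
    """(beta2, beta1) on the lognormal curve, with a = exp(sigma^2)."""
    beta1 = (a + 2) ** 2 * (a - 1)                 # squared skewness
    beta2 = a ** 4 + 2 * a ** 3 + 3 * a ** 2 - 3   # kurtosis (not excess)
    return beta2, beta1

print(lognormal_betas(1.0))    # (3.0, 0.0): the normal point, as a -> 1
print(lognormal_betas(1.724))  # beta2 is roughly 25, the right edge of the diagram
```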
The logistic point is at the bottom to the right of the normal point.
The lognormal curve is shown with dashed black lines, in the type $\text{VI}$ region (but it's not type VI).
In R a plot like this can most readily be generated using the descdist function in the fitdistrplus package:
Here the blue point is a particular sample (as it happens, from an ExGaussian). Note that this display has the axes flipped around from my earlier diagram. (It's harder to add additional distribution points and lines to this, but it is still possible if you look at what the code in descdist does, specifically with how it uses the kurtmax variable.)
Since people are sure to ask for it, here's R code for the first plot:
plot(c(0,25),c(0,14),type="n", frame=FALSE,
ylab=expression(beta[1]),
xlab=expression(beta[2]) )
#region 0
polygon(c(0,0,1,15),c(14,0,0,14),col=rgb(.8,0.8,.85,.5),border=FALSE)
lines(c(1,15),c(0,14),lwd=2,col="grey")
# region I
polygon(c(15,1,3,3+21),c(14,0,0,14),col=rgb(.99,0.5,.8,.5),border=FALSE)
lines(c(3,3+21),c(0,14),lwd=2,col=rgb(.99,0.5,.8,1))
# region VI
a=rev(exp(seq(log(5.759),log(1000),.2)))
polygon(c(25,3+21,3,3*(a+5)*(a-2)/(a-3)/(a-4)),
c(14,14,0,16*(a-2)/(a-3)^2),
col=rgb(.95,0.8,.3,.4),border=FALSE)
lines(c(3,3*(a+5)*(a-2)/(a-3)/(a-4)),
c(0,16*(a-2)/(a-3)^2),lwd=2,col=rgb(0.95,.8,.3,.9))
# region IV
a=rev(a)
polygon(c(3*(a+5)*(a-2)/(a-3)/(a-4),3,25),
c(16*(a-2)/(a-3)^2,0,0),
col=rgb(.5,0.95,.6,.4),border=FALSE)
points(1.8,0,pch="U")
points(3,0,pch="N")
points(9,4,pch="E")
text(0.5,13,labels="U - uniform",pos=4)
text(0.5,12,labels="N - normal",pos=4)
text(0.5,11,labels="E - exponential",pos=4)
text(1,8,labels="impossible",pos=4)
text(9,7,labels="beta",pos=4)
text(16,7.75,labels="beta prime, F",pos=4)
text(9.2,6.25,labels="(I)",pos=4,family="serif")
text(1.8,0,labels="(II)",pos=4,cex=0.8,family="serif")
text(16.2,7,labels="(VI)",pos=4,family="serif")
text(15,2,labels="(IV)",pos=4,family="serif")
text(13,7.1,labels="(III)",pos=4,cex=0.8,family="serif")
text(17,5.7,labels="(V)",pos=4,cex=0.8,family="serif")
text(12.5,0,labels="(VII)",pos=4,cex=0.8,family="serif")
text(12.2,0,labels="t",pos=4)
text(11.25,5.8,labels="gamma",srt=25,pos=4)
text(15.3,5,labels="inverse",srt=14,pos=2)
text(14.9,5.15,labels="gamma",srt=11,pos=4)
(It is necessary to stretch the plot area to be roughly the shape I made it above or the slanted text won't be in the right place.)
The code to place the point for the logistic and the curve for the lognormal is then:
points(4.2,0,pch="L")
a=c((10:17)/10,1.724)
lines(a^4 + 2*a^3 + 3*a^2 - 3,(a+2)^2*(a-1),lty=2)
27,465 | What are criteria and decision making for non-linearity in statistical models?
The model building process involves a model builder making many decisions. One of the decisions involves choosing among different classes of models to explore. There are many classes of models that could be considered; for example, ARIMA models, ARDL models, Multiple Source of Error State-Space models, LSTAR models, Min-Max models, to name but a few. Of course, some classes of models are broader than others and it's not uncommon to find that some classes of models are sub-classes of others.
Given the nature of the question, we can focus mainly on just two classes of models; linear models and non-linear models.
With the above picture in mind, I'll begin to address the OP's question of when it is useful to adopt a non-linear model and if there is a logical framework for doing so - from a statistical and methodological perspective.
The first thing to notice is that linear models are a small subclass of non-linear models. In other words, linear models are special cases of non-linear models. There are some exceptions to that statement, but, for present purposes, we won't lose much by accepting it to simplify matters.
Typically, a model builder will select a class of models and proceed to choose a model from within that particular class by employing some methodology. A simple example is when one decides to model a time-series as an ARIMA process and then follows the Box-Jenkins methodology to select a model from among the class of ARIMA models. Working in this fashion, with methodologies associated with families of models, is a matter of practical necessity.
A consequence of deciding to build a non-linear model is that the model selection problem becomes much greater (more models must be considered and more decisions are faced) when compared to choosing from among the smaller set of linear models, so there is a real practical issue at hand. Furthermore, there may not even be fully developed methodologies (known, accepted, understood, easy to communicate) to use in order to select from some families of non-linear models. Further still, another disadvantage of building non-linear models is that linear models are easier to use and their probabilistic properties are better known (TerΓ€svirta, TjΓΈstheim, and Granger (2010)).
That said, the OP asks for statistical grounds for guiding the decision rather than practical or domain theoretic ones, so I must carry on.
Before even contemplating how to deal with selecting which non-linear models to work with, one must decide initially whether to work with linear models or non-linear models, instead. A decision! How to make this choice?
By appeal to Granger and Terasvirta (1993), I adopt the following argument, which has two main points in response to the following two questions.
Q: When is it useful to build a non-linear model? In short, it may be useful to build a non-linear model when the class of linear models has already been considered and deemed insufficient to characterize the relationship under inspection. This non-linear modelling procedure (decision making process) can be said to go from simple to general, in the sense that it goes from linear to non-linear.
Q: Are there statistical grounds that can be used to justify building a non-linear model? If one decides to build a non-linear model based on the results of linearity tests, I would say, yes, there are. If linearity testing suggests that there is no significant nonlinearity in the relationship then building a nonlinear model would not be recommended; testing should precede the decision to build.
I will flesh these points out by direct reference to Granger and Terasvirta (1993):
Before building a nonlinear model it is advisable to find out if
indeed a linear model would adequately characterize the [economic]
relationships under analysis. If this were the case, there would be
more statistical theory available for building a reasonable model than
if a nonlinear model were appropriate. Furthermore, obtaining optimal
forecasts for more than one period ahead would be much simpler if the
model were linear. It may happen, at least when the time-series are
short, that the investigator successfully estimates a nonlinear model
although the true relationship between the variables is linear. The
danger of unnecessarily complicating the model-building is therefore
real, but can be diminished by linearity testing.
In the more recent book, TerΓ€svirta, TjΓΈstheim, and Granger (2010), the same sort of advice is given, which I now quote:
From the practical point of view it is [therefore] useful to test
linearity before attempting estimation of the more complicated
nonlinear model. In many cases, testing is even necessary from a
statistical point of view. A number of popular nonlinear models are
not identified under linearity. If the true model that generated the
data is linear and the nonlinear model one is interested in nests this
linear model, the parameters of the nonlinear model cannot be
estimated consistently. Thus linearity testing has to precede any
nonlinear modelling and estimation.
Let me end with an example.
In the context of modelling business cycles, a practical example of using statistical grounds to justify building a non-linear model may be as follows. Since linear univariate or vector autoregressive models are unable to generate asymmetrical cyclical time-series, a non-linear modelling approach, which can handle asymmetries in the data, is worth consideration. An expanded version of this example about data reversibility can be found in Tong (1993).
Apologies if I've concentrated too much on time-series models. I'm sure, however, that some of the ideas are applicable in other settings, too.
27,466 | What are criteria and decision making for non-linearity in statistical models?
The over-arching issue is to decide for what types of problems linearity is to be expected, otherwise allow relationships to be nonlinear as the sample size allows. Most processes in biology, social sciences, and other fields are nonlinear. The only situations where I expect linear relationships are:
Newtonian mechanics
Prediction of $Y$ from $Y$ measured at an earlier time
The latter example includes the case where one has a dependent variable $Y$ that is also measured at baseline (time zero).
I rarely see a relationship that is everywhere linear in a large dataset.
The decision to include nonlinearities in regression models does not come so much from a global statistical principle but rather from the way the world works. One exception is when a sub-optimal statistical framework has been chosen and nonlinearities or interaction terms have to be introduced just to make up for badly choosing the framework. Interaction terms can sometimes be needed to offset under-modeling (e.g., by assuming linearity) of main effects. More main effects may be needed to offset the information loss resulting from under-modeling the other main effects.
Researchers sometimes agonize over whether to include a certain variable while they are underfitting a host of other variables by forcing them to act linearly. In my experience the linearity assumption is one of the most violated of all assumptions that strongly matter.
27,467 | What are criteria and decision making for non-linearity in statistical models?
When building a model I always try the squares of variables together with linear components. For instance, when building a simple regression model $$y_i=\alpha +\beta x_i+\varepsilon_i$$ I'll throw in a square term $$y_i=\alpha +\beta x_i+\gamma x_i^2+\varepsilon_i$$
If $\gamma$ is significant, it may be a case for a nonlinear model. The intuition is, of course, the Taylor expansion: for a linear function only the first derivative is nonzero, while for nonlinear functions higher-order derivatives are nonzero as well.
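The check behind "if $\gamma$ is significant" can be done as a nested-model F test: fit with and without the square term and compare residual sums of squares. Here is a stdlib-only sketch (in Python for self-containedness; the data are simulated and all names are mine):

```python
import random

def ols(X, y):
    """Solve the normal equations X'X b = X'y by Gauss-Jordan elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)]
         + [sum(row[i] * yi for row, yi in zip(X, y))] for i in range(k)]
    for c in range(k):
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        for r in range(k):
            if r != c:
                fac = A[r][c]
                A[r] = [v - fac * w for v, w in zip(A[r], A[c])]
    return [row[k] for row in A]

def rss(X, y, b):
    return sum((yi - sum(bj * xj for bj, xj in zip(b, row))) ** 2
               for row, yi in zip(X, y))

random.seed(0)
n = 200
x = [random.uniform(-2, 2) for _ in range(n)]
y = [1 + 2 * xi + 1.5 * xi ** 2 + random.gauss(0, 1) for xi in x]  # truly nonlinear

X1 = [[1.0, xi] for xi in x]            # linear specification
X2 = [[1.0, xi, xi ** 2] for xi in x]   # adds the square term
rss1 = rss(X1, y, ols(X1, y))
rss2 = rss(X2, y, ols(X2, y))
F = (rss1 - rss2) / (rss2 / (n - 3))    # 1 extra parameter, n - 3 residual df
print(F > 10)  # True: the square term clearly matters here
```

A large F (compared against an $F_{1,\,n-3}$ reference) corresponds to a significant $\gamma$; in R the same comparison is what `anova(fit0, fit1)` reports.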
I also often try an asymmetric specification candidate:
$$y_i=\alpha +\beta \max(0,x_i)+\gamma \min(0,x_i)+\varepsilon_i$$
If $\gamma$ differs significantly from $\beta$, then it makes me consider exploring asymmetric specifications.
Sometimes, I have some special values or bands in my data; or my histograms of explanatory variables have kinks and inflection points. So, I try out the linear splines around these special points or regions. The simplest linear splines would be: $$x^{a-}=\min(x,a)$$
$$x^{a+}=\max(x,a)$$
This would introduce different slopes for $x$ before and after the point $x=a$. You can have several slopes for the same variable in different regions. If my linear spline is significant, then I either play with knot points and use it, or think about nonlinear models.
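To see how the $\min/\max$ pair encodes two slopes, here is a tiny sketch (Python; the knot and coefficients are made up):

```python
# The min/max spline pair: with knot a, the coefficient on min(x, a) is the
# slope below the knot and the coefficient on max(x, a) the slope above it,
# and the fit is continuous at x = a.
a = 0.0
alpha, b_lo, b_hi = 1.0, 1.0, 3.0   # intercept, slope below a, slope above a

def f(x):
    return alpha + b_lo * min(x, a) + b_hi * max(x, a)

print(f(-2.0), f(-1.0))  # -1.0 0.0 -> slope 1 to the left of the knot
print(f(1.0), f(2.0))    # 4.0 7.0  -> slope 3 to the right of the knot
```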
This is not a systematic approach; it's just one of the things I always do.
27,468 | Smooth a circular/periodic time series
To make a periodic smooth (on any platform), just append the data to themselves, smooth the longer list, and cut off the ends.
Here is an R illustration:
y <- sqrt(table(factor(x[,"hour"], levels=0:23)))
y <- c(y,y,y)
x.mid <- 1:24; offset <- 24
plot(x.mid-1, y[x.mid+offset]^2, pch=19, xlab="Hour", ylab="Count")
y.smooth <- lowess(y, f=1/8)
lines(x.mid-1, y.smooth$y[x.mid+offset]^2, lwd=2, col="Blue")
(Because these are counts I chose to smooth their square roots; they were converted back to counts for plotting.) The span in lowess has been shrunk considerably from its default of f=2/3 because (a) we are now processing an array three times longer, which should cause us to reduce $f$ to $2/9$, and (b) I want a fairly local smooth so that no appreciable endpoint effects show up in the middle third.
It has done a pretty good job with these data. In particular, the anomaly at hour 0 has been smoothed right through.
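The same append-smooth-cut trick works with any smoother. Here is a minimal sketch (in Python so it is self-contained, with a plain moving average standing in for lowess; the counts are made up):

```python
# Tile the series three times, smooth, keep the middle third: values near
# the start are then smoothed together with values near the end, so the
# smooth joins up periodically.
def periodic_smooth(y, half_window=2):
    n = len(y)
    tiled = y * 3                         # append the data to themselves
    out = []
    for i in range(n, 2 * n):             # keep the middle copy only
        w = tiled[i - half_window : i + half_window + 1]
        out.append(sum(w) / len(w))
    return out

counts = [5, 3, 1, 0, 0, 1, 4, 9, 12, 9, 6, 5]   # made-up periodic counts
sm = periodic_smooth(counts)
print(sm[0])  # 4.0: position 0 averaged with positions 10, 11, 1 and 2 (wrapped)
```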
27,469 | Smooth a circular/periodic time series
I don't use R routinely and I have never used ggplot, but there is a simple story here, or so I guess.
Time of day is manifestly a circular or periodic variable. In your data you have hours 0(1)23 which wrap around, so that 23 is followed by 0. However, ggplot does not know that, at least from the information you have given it. So far as it is concerned there could be values at -1, -2, etc. or at 24, 25, etc. and so some of the probability is presumably smoothed beyond the limits of the observed data, and indeed beyond the limits of the possible data.
This will be happening for your main data too, but it is just not quite so noticeable.
If you want kernel density estimates for such data, you need a routine smart enough to handle such periodic or circular variables properly. "Properly" means that the routine smooths on a circular space, recognising that 0 follows 23. In some ways the smoothing of such distributions is easier than the usual case, as there are no boundary problems (as there are no boundaries). Others should be able to advise on functions to use in R.
This kind of data falls somewhere between periodic time series and circular statistics.
The data presented have 99 observations. For that a histogram works quite well, although I can see that you might want to smooth it a little.
(UPDATE) It's a matter of taste and judgement but I'd consider your smooth curve drastically oversmoothed.
Here as a sample is a biweight density estimate. I used my own Stata program for circular data in degrees with the ad hoc conversion 15 * (hour + 0.5) but densities expressed per hour. This in contrast is a little undersmoothed, but you can tune your choices.
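The kind of circular kernel density routine the answer asks for exists in R's circular package. This is a hedged sketch, not part of the original answer: the toy data and the bandwidth value are arbitrary, and the hours-to-degrees conversion mirrors the 15 * (hour + 0.5) one used above.

```r
library(circular)

hours <- c(0, 1, 3, 21, 22, 23, 23)                 # toy data, not the original
deg   <- circular(15 * (hours + 0.5), units = "degrees")

# von Mises kernel density estimate on the circle: 0 follows 23 automatically,
# so there are no boundary problems
dens <- density.circular(deg, bw = 20)
plot(dens)
```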
27,470 | Smooth a circular/periodic time series | Doing Tukey's 4253H, twice on three concatenated copies of the raw counts and then taking the middle set of smoothed values gives much the same picture as whuber's lowess on the square roots of the counts.
27,471 | Smooth a circular/periodic time series | In addition, and as a more complex alternative to what has been suggested, you might want to look at periodic splines. You can find tools to fit them in the R packages splines and mgcv. The advantage I see over approaches already suggested is that you can compute degrees of freedom of the fit, which are not obvious with the 'three copies' method.
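For instance, mgcv's cyclic cubic spline basis enforces periodicity directly. The sketch below is illustrative only (the hourly counts are simulated and the basis dimension k is arbitrary); it shows both the periodic fit and the effective degrees of freedom mentioned above.

```r
library(mgcv)

# Hypothetical hourly counts; 'hour' runs 0..23 and wraps around
set.seed(1)
x.df <- data.frame(hour = 0:23, count = rpois(24, lambda = 5))

# bs = "cc" requests a cyclic cubic regression spline; the knots
# argument pins the period so the fit matches at hour 0 and hour 24
fit <- gam(count ~ s(hour, bs = "cc", k = 10), family = poisson,
           knots = list(hour = c(0, 24)), data = x.df)

summary(fit)$edf   # effective degrees of freedom of the smooth
```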
27,472 | Smooth a circular/periodic time series | Still another approach: periodic splines (as suggested in the answer by F.Tusell), but here we also show an implementation in R. We will use a Poisson glm to fit the histogram counts, resulting in the following histogram with smooth:
The code used (starting with the data object x given in question):
library(pbs) # basis for periodic spline
x.tab <- with(x, table(factor(hour,levels=as.character(0:23))))
x.df <- data.frame(time=0:23, count=as.vector(x.tab))
mod.hist <- with(x.df, glm(count ~ pbs::pbs(time, df=4, Boundary.knots=c(0,24)), family=poisson))
pred <- predict(mod.hist, type="response", newdata=data.frame(time=0:24))
with(x.df, {plot(time, count, type="h", col="blue", main="Histogram"); lines(time, pred[1:24], col="red")})
27,473 | Linear regression explanations | Well, it's also linear in the predictors.
For example, if you fit a quadratic you might say 'see, not linear!'... but it is! If $x_1 = x$ and $x_2 = x^2$, and you regress on $x_1$ and $x_2$, it's certainly linear in $(1,x_1,x_2)$. It's linear in the predictors you gave it.
If you regress on $x_1 = \sin(\pi x)$ and $x_2 = \cos(\pi x)$... well, it's still linear in $(1,x_1,x_2)$.
and so on.
By judicious choices of your $x$'s you can use it to fit curves, but it's still linear in what you give it.
Even a local polynomial (kernel-type) fit is actually linear in the predictors. You can write the whole thing as one large linear model.
If $E(y) = X\beta$, $X\beta$ is clearly linear in either $X$ (in the columns of X) or $\beta$.
But yes, the linear-in-the-parameters is what the 'linear' in linear regression 'means'.
Is it at least partly misleading that the elementary presentations are always drawing straight line relationships when regression can fit curves? Perhaps, but you pretty much have to start with lines.
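A minimal R illustration of the point above (the simulated data here are mine, not from the answer): both fits below produce curves, yet each is linear in the predictors handed to lm.

```r
set.seed(1)
x <- seq(-2, 2, length.out = 50)
y <- 1 + x - 2 * x^2 + rnorm(50, sd = 0.3)

# A curved fit, but linear in the constructed predictors x1 = x, x2 = x^2
fit <- lm(y ~ x + I(x^2))
coef(fit)   # three coefficients of a linear model

# Same idea with trigonometric predictors -- still a linear model
fit2 <- lm(y ~ sin(pi * x) + cos(pi * x))
```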
27,474 | Linear regression explanations | You're right that the "linear" in linear regression or linear models actually stands for linear in the parameters. That means that the parameters you are estimating are coefficients.
For what it's worth however, a curvilinear-looking function that is modeled with a polynomial (e.g., $Y=\beta_0+\beta_1X_1+\beta_2X_1^2$) is actually a multiple regression model, even though we plot the model on a 2-dimensional scatterplot and even though we think of $X_1$ and $X_1^2$ as the same underlying variable. When situated in the appropriate space though, it really is a straight line / flat plane.
It can be hard to see it in a 3-dimensional plot as well, because the relationship between $X_1$ and $X^2_1$ is curvilinear. But imagine a perfectly flat plane that slices through a coke can; from the plane's point of view, the line where the plane intersects the can's wall is straight. If you could arrange it such that you were looking at this perfectly edge-on through the plane, you would see that the function was linear.
Update: I have an example of this worked out in my answer here: Why is polynomial regression considered a special case of multiple linear regression?
27,475 | Linear regression explanations | Well, simple linear regression usually refers to a model with only a single predictor, so the relationship would be linear (if you transform a variable then the relationship is still linear between the transformed variables).
Many curved relationships are modeled using polynomials or splines which moves away from simple linear regression into multiple linear regression.
Though when I teach simple linear regression I also try to do a teaser saying that curved relationships can be modeled as well, but the students have to come back for another statistics course to learn the details (or they should consult with a statistician to fit these models).
27,476 | What is a full conditional probability? | I think the context is an MCMC algorithm, because this terminology is rather standard in such a context. The goal is to simulate a multivariate distribution, that is, the distribution of a random vector $(\theta_1, \ldots,\theta_p)$. The full conditional distribution of $\theta_1$ is then nothing but the conditional distribution of $\theta_1$ given all the other variables.
27,477 | What is a full conditional probability? | It is the probability distribution of a variable (node) in a probabilistic graphical model (PGM) conditioned on the values of all the other variables in the PGM. It is equal to the distribution of the variable conditioned on the values of its Markov blanket, which is formed by the children, the parents and the coparents of the node.
As suggested by Stephane, this is useful for MCMC sampling.
27,478 | How to use principal components as predictors in GLM? | It is possible and sometimes appropriate to use a subset of the principal components as explanatory variables in a linear model rather than the original variables. The resulting coefficients then need to be back-transformed to apply to the original variables. The results are biased but may be superior to more straightforward techniques.
PCA delivers a set of principal components that are linear combinations of the original variables. If you have $k$ original variables you still have $k$ principal components in the end, but they have been rotated through $k$-dimensional space so they are orthogonal to (i.e. uncorrelated with) each other (this is easiest to think through with just two variables).
The trick to using PCA results in a linear model is that you make a decision to eliminate a certain number of the principal components. This decision is based on similar criteria to the "usual" black-art variable selection processes for building models.
The method is used to deal with multi-collinearity. It is reasonably common in linear regression with a Normal response and identity link function from the linear predictor to the response; but less common with a generalized linear model. There is at least one article on the issues on the web.
I'm not aware of any user-friendly software implementations. It would be fairly straightforward to do the PCA and use the resulting principal components as your explanatory variables in a generalized linear model; and then to translate back to the original scale. Estimating the distribution (variance, bias and shape) of your estimators having done this would be tricky however; the standard output from your generalized linear model will be wrong because it assumes you are dealing with original observations. You could build a bootstrap around the whole procedure (PCA and glm combined), which would be feasible in either R or SAS.
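A rough sketch of that procedure (my illustration, not from the answer; the dataset, the binary response and the choice of two components are all arbitrary, and the caveat above about standard errors still applies):

```r
# PCA on standardized predictors, then a GLM on the leading scores
X <- scale(as.matrix(mtcars[, c("disp", "hp", "wt", "drat")]))
y <- mtcars$am                      # a binary response, for illustration

pc     <- prcomp(X)                 # X is already centred and scaled
k      <- 2                         # number of components kept (arbitrary)
scores <- pc$x[, 1:k]

fit <- glm(y ~ scores, family = binomial)

# Translate back to coefficients on the (standardized) original variables
beta_orig <- pc$rotation[, 1:k] %*% coef(fit)[-1]
beta_orig
```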
27,479 | How to use principal components as predictors in GLM? | My answer is not for the original question, but a comment on your approach.
Applying PCA first and then running a generalized linear model is not recommended. The reason is that PCA ranks variable importance by "variable variance", not by "how the variable is correlated with the prediction target". In other words, the variable selection can be totally misleading, selecting unimportant variables.
Here is an example: the left figure shows that x1 is the important variable for classifying the two types of points, but PCA shows the opposite.
Details can be found in my answer here: How to decide between PCA and logistic regression?
27,480 | How to use principal components as predictors in GLM? | I would suggest you take a look at this paper. It does a nice job showing the relationship between Gaussian family distributions and PCA-like learner systems.
http://papers.nips.cc/paper/2078-a-generalization-of-principal-components-analysis-to-the-exponential-family.pdf
EDIT
Synopsis: while many think of PCA from the geometric interpretation of finding the orthogonal vectors within a dataset most responsible for the variance and then providing parameters to correctly re-orient one's space to those vectors, this paper builds up PCA using exponential probability functions in the context of generalized linear models, and offers a more powerful extension of PCA for other probability functions within the exponential family. In addition, they build a PCA-like learner algorithm using Bregman divergences. It's fairly easy to follow and for you, it seems like it could help you understand the link between PCA and generalized linear models.
citation :
Collins, Michael et al. "A Generalization of Principal Component Analysis to the Exponential Family". Neural Information Processing Systems
27,481 | Introductory material on splines | Echoing my comment, I would put forth the monograph "Nonparametric Regression and Generalized Linear Models" by Green and Silverman. It is not as meticulous as the likes of de Boor's "A Practical Guide to Splines" (after all, it's not a spline-centric treatise) but it is comprehensive and provides a lucid introductory account of splines: the authors motivate the concept of a spline by observing that when a spline is bent in the shape of a curve $g,$ the leading term in the strain energy is $\propto \int {g^{\prime\prime}}^2.$ In doing that, they basically "quantify" the roughness of a curve.
Now, that is enough of an intuitive opening to a new realm. Again, this book doesn't delve as deeply into the functional-analysis formalism as "Smoothing Splines: Methods and Applications" by Wang, but this must not lead anyone to set it aside. The authors cover interpolating, cubic, natural cubic, and smoothing splines, their properties, constructions, plotting, existence of the minimizing spline, and associated algorithms. There is a chapter on partial splines (unfortunately, I didn't cover that, so won't comment).
In all, while this is definitely not a spline-centric book, it's worth a try. I am not aware of the OP's students' level, but as a student myself I enjoyed the first reading, which offers enough relevant mathematical material for a first read.
Recommendation:
Nonparametric Regression and Generalized Linear Models: A roughness penalty approach, P. J. Green, B. W. Silverman, Chapman & Hall, $1994.$
27,482 | Introductory material on splines | I found the section on splines in Frank Harrell's Regression Modeling Strategies very helpful. Yes, the book is not only about splines, but if your students are learning about them, they may find the rest of this tome helpful, too.
27,483 | Introductory material on splines | This answer is coming from a biostatistics angle.
I would second Frank Harrell's Regression Modelling Strategies, as per Stephan's answer https://link.springer.com/book/10.1007/978-3-319-19425-7.
Other resources I have used include a 2010 paper on general implementation (1). Per the title, it is focused on restricted cubic splines and includes a SAS macro (I think SAS has a bit more built-in functionality nowadays, but I use R).
I also read a more recent paper (2) focused on discussing different spline types and then considering their implementations in R. This is quite handy since there is some review therein about maturity of different packages for fitting splines. For me this seems reasonably accessible at a conceptual level (for those wanting to consider and use splines in applied settings) while also containing sufficient detail on basis forms for those who prefer a mathematical presentation.
(1) Desquilbet L, Mariotti F. Dose-response analyses using restricted cubic spline functions in public health research. Stat Med 2010;29(9):1037-57. doi: 10.1002/sim.3841 https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.3841
(2) Perperoglou, A., Sauerbrei, W., Abrahamowicz, M. et al. A review of spline function procedures in R. BMC Med Res Methodol 19, 46 (2019). https://doi.org/10.1186/s12874-019-0666-3
https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-019-0666-3
27,484 | Introductory material on splines | I think Semiparametric Regression with R, in particular Chapter 2 "Penalized Splines", by Harezlak, Ruppert, and Wand (2018) would work as a first introduction. The book is focused on practical implementation in $\mathsf{R}$ and accompanied by the $\mathsf{R}$ package HRW.
Probably Chapter 3 "Scatterplot Smoothing" in Semiparametric Regression by Ruppert, Wand, and Carroll (2003) would be a good addition.
27,485 | Do neural networks create a separating hyperplane? | Disclaimer: I assume your question refers to multi-layer perceptrons, used as classifiers. But, if you want to engage in hair splitting, there are other neural network architectures and applications which do not create a separating hyperplane.
In a multi-layer perceptron, each neuron first computes a linear function on its inputs and then passes it through an activation function, which is almost always non-linear. All layers except the last are used to perform some kind of non-linear data transformation, with the eventual purpose of making the data linearly separable, or close to linearly separable. The neurons in the final, output layer also compute a linear function of their inputs. Typically, they would also pass it through a non-linear function, like a sigmoid, but this is not necessary for understanding the concept. It suffices to set a threshold on the linear output of the output neuron, and you get a separating hyperplane.
Mathematically: Let us denote by $x_i$ the output of the $i$-th neuron in the last hidden layer, i.e. the last layer before the output layer. Let there be just one output neuron in the output layer and let $w_i$ be the weight connecting the $i$-th neuron to the output neuron. Also, let $w_0$ be the bias of the output neuron. Then, the output neuron computes the linear function:
$$
y = w_0 + \sum_i w_i x_i
$$
Let us say, we classify all observations for which $y < 0$ as "class A" and the others as "class B". In other words, the equation
$$
w_0 + \sum_i w_i x_i = 0
$$
describes the class boundary. This is again a linear function; an equation describing a hyperplane in the implicit form.
Example: Let's say you have two neurons in the last hidden layer, with the outputs $x_1$ and $x_2$. Then, the equation
$$
w_0 + w_1 x_1 + w_2 x_2 = 0
$$
describes a one-dimensional class boundary (a straight line) in the two-dimensional space, spanned by $x_1$ and $x_2$. If you want, you can reformat the equation to get a perhaps more common explicit form:
$$
x_2 = -\frac{1}{w_2}(w_0 + w_1 x_1)
$$
The same works for any number of dimensions.
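As a quick numerical sketch (with made-up weights, in plain Python, not taken from the answer), thresholding the output neuron's linear function at zero splits the plane exactly along the line described by the equations above:

```python
# Hypothetical weights for an output neuron with two hidden-layer inputs.
w0, w1, w2 = -1.0, 2.0, 4.0

def classify(x1, x2):
    """Class "A" if the linear output is negative, else "B"."""
    y = w0 + w1 * x1 + w2 * x2
    return "A" if y < 0 else "B"

def boundary_x2(x1):
    """Explicit form of the separating line: x2 = -(w0 + w1*x1)/w2."""
    return -(w0 + w1 * x1) / w2

# Points just below and just above the line get different labels.
x1 = 0.5
print(classify(x1, boundary_x2(x1) - 0.1))  # below the line -> "A"
print(classify(x1, boundary_x2(x1) + 0.1))  # above the line -> "B"
```

Nudging any point across the line `boundary_x2` flips the label, which is all a separating hyperplane is.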
Update in response to comments:
The above described network, with only one neuron in the output layer, is suitable only for classifying the data into two classes. For more classes, you'll need more output neurons. In practice, people usually use as many output neurons as they have classes. Each output neuron is "responsible" for recognising one class. It should output a high value if it "thinks" that the present observation belongs to "its" class, and a low value otherwise. Again, each output neuron determines a separating hyperplane, with points belonging to "its" class ideally on one, positive, high-valued side of the hyperplane and all other points on the other.
Of course, in such case it is possible for the network to produce ambiguous results, e.g. two or more neurons claiming that a point belongs to their class. Resolving these ambiguities is beyond the scope of this question, but the basic idea is to trust more the neuron with the highest value. People commonly use some kind of normalisation, like softmax, to convert the network output to "class probabilities".
As noted above, it is also common for the output neurons to apply a non-linear transformation to the previously computed linear function. Usual choices are $\tanh$ and the logistic function, which are both sigmoid, meaning that they approach horizontal asymptotes as their arguments approach positive or negative infinity. With a suitable encoding of class labels as the values of these asymptotes, the training of the neural network consists of adjusting the weights so that its output, according to some error metric, approaches the true class labels.
27,486 | Do neural networks create a separating hyperplane? | The final layer can be seen as a generalized linear model (such as a logistic regression) on extracted features.
Consider a basic network with an input layer, a hidden layer, and an output layer. Draw it out and then cover up the input layer. Now it looks like a linear or generalized linear model (such as logistic regression) on some features.
That's what's happening and what the author means: the network uses early layers to figure out the features. Then the network regresses on those features.
27,487 | Do neural networks create a separating hyperplane? | As the other answers have stated, all the layers up to the last layer can be viewed as a complicated way to create features (feature map), with the last layer representing a linear model. The traditional neural network construction with sigmoid activation function corresponds nicely with logistic regression.
The power of neural networks comes from the nonlinearity of the activation function, since stacking linear layers is equivalent to just one linear layer. Theoretical results are various universal approximation theorems, the first by Cybenko (1989) stating that arbitrary width neural networks with only one hidden layer can approximate any continuous function arbitrarily well. However this is an existence result and does not indicate how to actually learn these weights.
More recently (2018), neural tangent kernels (NTKs) describe the learning process of gradient descent in NNs when considering the limit of infinite width. However, as a kernel method, in the unrealistic infinite-width regime this simplifies to a linear model and does not match up with the empirical performance of over-parameterized neural networks (Arora et al., 2019).
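The claim above that stacking linear layers is equivalent to one linear layer can be checked directly: $W_2(W_1 x + b_1) + b_2 = (W_2 W_1)x + (W_2 b_1 + b_2)$. A small plain-Python sketch with arbitrary made-up numbers:

```python
def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

# Two linear layers with arbitrary weights and no activation in between.
W1 = [[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]]   # maps 2 -> 3
b1 = [0.1, 0.2, -0.3]
W2 = [[1.0, -2.0, 0.5], [0.0, 1.0, 1.0]]     # maps 3 -> 2
b2 = [0.7, -0.1]

x = [2.0, -1.5]

# Layer-by-layer computation...
stacked = vadd(matvec(W2, vadd(matvec(W1, x), b1)), b2)
# ...equals a single linear layer with W = W2 W1 and b = W2 b1 + b2.
collapsed = vadd(matvec(matmul(W2, W1), x), vadd(matvec(W2, b1), b2))

assert all(abs(a - c) < 1e-9 for a, c in zip(stacked, collapsed))
```

Without a nonlinear activation between layers, depth adds nothing; this is exactly why the activation function carries the power of the network.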
27,488 | Do neural networks create a separating hyperplane? | I think I may have answered my question.....
Let $x$ be the last hidden layer before the output layer in the neural network
Say I have $n$ classes
Let the multi-class decision rule be simply the class with the highest output value
Mathematically, we classify $x$ as belonging to a class $i$ if, $\forall j$:
$f(W_ix+b_i) > f(W_{j\neq i}x+b_j)$ where $f$ is a nonlinear activation.
We now solve for the values of $x$ which satisfy this decision rule (i.e. $\{x \mid \text{decision}(x) = i\}$):
If $f$ is strictly increasing (so that it preserves order), this implies $f(W_ix+b_i) > f(W_{j\neq i}x +b_j) \iff W_ix + b_i > W_{j\neq i}x+b_j$
bringing everything to one side gives:
$\implies (W_{i1}-W_{j1})x_1+(W_{i2}-W_{j2})x_2+(W_{i3}-W_{j3})x_3 +...+ (b_i -b_j) > 0$
so this decision rule does correspond to a separating hyperplane
Edit: it's a hyperplane boundary for fixed $j$, but the inequality must hold for all $j$, so I guess the decision region must be the intersection of all of the corresponding half-spaces.....?
It would be interesting to obtain an explicit expression and solve for the set $x$ that satisfies a decision rule where, instead of $x$ being the layer before the output layer, it would instead be the input to a multi-layer neural network.
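Following the derivation above, the region assigned to class $i$ by the highest-output rule is the intersection of the half-spaces $W_ix + b_i > W_jx + b_j$. A sketch with made-up weights, checking that the two descriptions agree on random points:

```python
import random

random.seed(0)

# Hypothetical output layer: 3 classes over a 2-dimensional feature vector x.
W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [0.0, 0.1, -0.2]

def scores(x):
    """The linear outputs W_i x + b_i for every class i."""
    return [sum(w * xi for w, xi in zip(Wi, x)) + bi for Wi, bi in zip(W, b)]

def decision(x):
    """Highest-output decision rule."""
    s = scores(x)
    return s.index(max(s))

def in_region(x, i):
    """x lies in class i's region iff every pairwise half-space inequality holds."""
    s = scores(x)
    return all(s[i] > s[j] for j in range(len(s)) if j != i)

# For random points, the argmax rule and the intersection-of-half-spaces
# description agree (exact ties have probability zero for continuous inputs).
for _ in range(1000):
    x = [random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0)]
    assert in_region(x, decision(x))
```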
27,489 | What are the k-means algorithm assumptions? | This is a complicated question, as I believe that the role of model assumptions in statistics is generally widely misunderstood, and the situation for k-means is even less clear than for many other situations.
Generally, having a "model assumption" means that there exists a theoretical result showing that a method does a good or even optimal job, in some sense, if the model assumption is in fact fulfilled. However, model assumptions are never precisely fulfilled in real data, so it doesn't make sense to say that "model assumptions have to be fulfilled". It is more important to understand what happens if they are not fulfilled, and this pretty much always depends on how exactly they are not fulfilled.
Some statements regarding k-means:
k-means can be derived as maximum likelihood estimator under a certain model for clusters that are normally distributed with a spherical covariance matrix, the same for all clusters.
Bock, H. H. (1996) Probabilistic models in cluster analysis. Computational Statistics & Data Analysis, 23, 5β28. URL: http://www.sciencedirect.com/science/article/pii/0167947396889195
Pollard has shown a general consistency result for k-means in a nonparametric setup, meaning that for pretty much all distributions (existing second moments assumed) k-means is a consistent estimator of its own canonical functional (see below).
Pollard, D. (1981) Strong consistency of k-means clustering. Annals of Statistics, 9, 135β140. URL: https://doi.org/10.1214/aos/1176345339.
Clustering can generally be interpreted as "constructive", meaning that a clustering method can be seen as not in the first place recovering "true" underlying clusters following a certain model, but rather as constructing a grouping that satisfies certain criteria defined by the clustering method itself. As such, the k-means objective function, minimising objects' squared Euclidean distances to the centroid of the cluster they are assigned to, defines its own concept of what a cluster actually is, and will give you the corresponding clusters whatever the underlying distribution is. This can be generalised to a definition of a functional defining k-means type clusters for any underlying distribution, which Pollard's theory is about. The important question here is whether this definition of clusters in a given application is what is relevant and useful to the user. This depends on specifics of the situation, and particularly not only on the data or the data generating process, but also on the aim of clustering and how the clusters are meant to be used. (Similar statements can, by the way, be made about least squares regression and many other statistical methods.)
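For concreteness, here is a minimal plain-Python sketch of the Lloyd iteration behind this objective function, alternating between assigning points to the nearest centroid and recomputing centroids (toy data and hand-picked initial centroids; real implementations use multiple random restarts):

```python
def sqdist(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, centroids, iters=20):
    """Minimal Lloyd's algorithm; `centroids` doubles as the initialisation."""
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        labels = [min(range(len(centroids)), key=lambda k: sqdist(p, centroids[k]))
                  for p in points]
        # Update step: each centroid becomes the mean of its assigned points.
        for k in range(len(centroids)):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                centroids[k] = [sum(c) / len(members) for c in zip(*members)]
    return labels, centroids

def wss(points, labels, centroids):
    """The k-means objective: within-cluster sum of squared distances."""
    return sum(sqdist(p, centroids[l]) for p, l in zip(points, labels))

# Two well-separated toy groups; initial centroids chosen by hand.
points = [[0.0, 0.0], [0.2, 0.1], [0.1, -0.1], [5.0, 5.0], [5.2, 4.9], [4.9, 5.1]]
labels, centroids = kmeans(points, [[0.0, 0.0], [1.0, 1.0]])
print(labels)  # first three points in one cluster, last three in the other
```

Both steps can only decrease (never increase) the `wss` objective, which is why the iteration converges to a local optimum of that objective, whatever the data distribution is.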
As far as I know, there is no theoretical result about k-means that states that it requires similar cluster sizes (at least not if "size" refers to the number of points; the "same covariance matrix" assumption of item 1 translates as "same spread in data space").
What is important is to understand what kind of clusters k-means tends to produce. And here in fact item 1 enters again. One can say several things:
(a) k-means is based on a variance criterion that treats all variables and clusters in the same manner, meaning that within-cluster variation tends to be the same for all clusters and all variables (the latter means "spherical").
(b) Comparing the k-means objective function with objective functions based on corresponding mixture models shows that k-means in comparison favours similar cluster sizes, although it doesn't enforce them.
(c) k-means strongly avoids large distances within clusters. This particularly means that if the data has groups with strong separation, k-means will find them (provided k is specified correctly), even if they are not spherical and/or have strongly different numbers of points. What this also means is that clusters will tend to be compact, i.e., not have large within-cluster distances, even if there are connected subsets in the data that spread widely and do have such large distances within them.
(d) One additional way to understand k-means is that it provides a Voronoi tesselation of the data space.
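Point (d) can be illustrated directly: for fixed centroids, the set of points assigned to a centroid is exactly its Voronoi cell, since membership depends only on which centroid is nearest (a sketch with made-up centroids):

```python
# Hypothetical fixed centroids in the plane.
centroids = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

def cell(point):
    """Index of the Voronoi cell (= k-means cluster) containing the point."""
    return min(range(len(centroids)),
               key=lambda k: sum((p - c) ** 2 for p, c in zip(point, centroids[k])))

print(cell((0.5, 0.5)))   # nearest to the first centroid -> 0
print(cell((3.8, -1.0)))  # nearest to the second centroid -> 1
print(cell((2.0, 2.5)))   # nearest to the third centroid -> 2
```

Cell boundaries are pieces of the perpendicular bisectors between centroid pairs, which is why k-means cluster boundaries are always piecewise linear.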
As normally in cluster analysis data don't come with the clusters known, it is very hard to check the assumptions automatically and formally. Particularly, if you run k-means and indeed find clusters that are not of similar sizes or not spherical, this doesn't necessarily mean that anything has gone wrong for the reasons stated above.
It is important to make sure that the k-means objective function makes sense for the specific clustering problem at hand (aim and use of clustering).
The best data-based diagnoses are in my view visual: scatterplots, maybe with dimension reduction or rotation. k-means is not good if cluster structures of interest can then be seen that disagree with the found clusters. Depending on the clustering aim, it may also not be good if the found clusters turn out not to be separated from each other (a lower number of clusters may be advisable in such a case, or no clustering at all).
As k-means doesn't scale variables (related to "sphericity"), k-means is not advisable with variables that have different and unrelated measurement units, although it may be fine after variables have been standardised.
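Since k-means puts all variables on the same squared-Euclidean footing, a variable on a large scale dominates the distances. A small plain-Python sketch of the usual fix, z-scoring each variable before clustering (toy data, made-up units):

```python
def standardise(data):
    """Z-score each column: subtract the column mean, divide by its standard deviation."""
    cols = list(zip(*data))
    means = [sum(c) / len(c) for c in cols]
    sds = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5
           for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(row, means, sds)] for row in data]

# Toy data on wildly different scales: height in cm, weight in tonnes.
data = [[150.0, 0.05], [160.0, 0.06], [170.0, 0.07], [180.0, 0.08]]
z = standardise(data)

# After standardisation, every column has mean 0 and standard deviation 1,
# so no single variable dominates the squared Euclidean distances.
for col in zip(*z):
    m = sum(col) / len(col)
    s = (sum((v - m) ** 2 for v in col) / len(col)) ** 0.5
    assert abs(m) < 1e-9 and abs(s - 1.0) < 1e-9
```

Whether standardisation is appropriate still depends on the application; if the variables share a meaningful common unit, the raw scales may carry real information.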
PS: I add comments on two issues.
(a) Some say that "a key assumption of k-means is that the number of clusters has to be known." Not really. There are several methods in the literature that allow one to fit k-means for different numbers of clusters and then decide which one is best, either by optimising a validity criterion or by looking for a small number of clusters after which the objective function is no longer strongly (or significantly) improved. Look for the Average Silhouette Width, the Calinski & Harabasz criterion, the gap statistic, the Sugar and James approach, prediction strength, bootstrap stability, etc. Now it is true that these different approaches in many situations do not agree. But the number of clusters problem ultimately comes with the same issues as the clustering problem more generally, namely that there is no uniquely defined true number of clusters, and an appropriate number of clusters cannot be objectively decided from the data alone but requires knowledge of the aim and use of the clustering. The data cannot decide, for example, how compact clusters ultimately have to be. There are methods that come with an in-built decision of the number of clusters, but what I wrote before applies to them as well. Many users like an automatic decision of the number of clusters as they don't feel confident to make decisions themselves, but this basically means that the user gives control over this decision to some usually not well understood algorithm, without any guarantee that the algorithm does something appropriate for the application at hand (and if required, such decision rules are also available for k-means).
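One of the simple heuristics mentioned above, fitting k-means for several values of k and stopping once the objective no longer improves much, can be sketched as follows. This is a toy elbow-style rule with an arbitrary 5% threshold, not any of the formal criteria cited, and it uses a naive deterministic single-start Lloyd iteration (plain Python):

```python
def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def lloyd(points, k, iters=25):
    """Naive, deterministic, single-start Lloyd iteration; returns the final objective."""
    if k == 1:
        centroids = [list(points[0])]
    else:
        # Toy initialisation: centroids spread along the input order.
        centroids = [list(points[i * (len(points) - 1) // (k - 1)]) for i in range(k)]
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: sqdist(p, centroids[j])) for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = [sum(c) / len(members) for c in zip(*members)]
    return sum(sqdist(p, centroids[l]) for p, l in zip(points, labels))

# Three clearly separated toy groups on a line.
points = [[0.0], [0.3], [0.1], [10.0], [10.2], [10.1], [20.0], [20.3], [20.1]]

objective = {k: lloyd(points, k) for k in range(1, 6)}
# Pick the smallest k after which the objective stops improving substantially
# (here: the drop to k+1 falls below 5% of the k=1 objective, an arbitrary cutoff).
chosen = next(k for k in range(1, 5)
              if objective[k] - objective[k + 1] < 0.05 * objective[1])
print(chosen)
```

On this toy data the rule picks three clusters, but as argued above, on real data such automatic rules are no substitute for knowing the aim and use of the clustering.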
(b) Here is an older thread discussing k-means: How to understand the drawbacks of K-means. This focuses strongly on the "drawbacks" of k-means, and there are some good examples that help to understand what k-means actually does and doesn't do. As such, that thread is very useful. However, I have an issue with pretty much all answers there, which is that they seem to implicitly assume that it is clear in any given situation what "the true clusters" are, but it isn't. No formal definition is given; people just show pictures and say "what k-means does is obviously wrong here", appealing to some supposedly general human intuition about what the true clusters should be. Here is an example (taken from David Robinson's answer in that thread):
So k-means gives a counterintuitive solution here and the writer claims that the correct solution should be to have one cluster for the points around the outer circle and the other one for the inner point cloud. But no definition of a clustering has been given according to which this is the "truth". There is no scientific basis for this. True is that such a solution is desired in certain applications, particularly when mimicking human pattern recognition. But this is not what clustering always is about. Particularly, in some applications large within cluster distances are not acceptable (for example when clustering is used to represent small compact data subsets by their centroid for efficient information reduction), and then the supposedly "correct" clustering doesn't work at all (although the 2-means solution is arguably not much better, and a larger number of clusters should be fitted that respects the separation between outer circle and inner point cloud, which can well be done with k-means). The baseline of this is that k-means has a certain behaviour that may not be appropriate in a given situation, but this doesn't mean it's "wrong" in any objective sense. | What are the k-means algorithm assumptions? | This is a complicated question, as I believe that the role of model assumptions in statistics is generally widely misunderstood, and the situation for k-means is even less clear than for many other si | What are the k-means algorithm assumptions?
This is a complicated question, as I believe that the role of model assumptions in statistics is generally widely misunderstood, and the situation for k-means is even less clear than for many other situations.
Generally having a "model assumption" means that there exist a theoretical result that a method does an in some sense good or even optimal job if the model assumption is in fact fulfilled. However, model assumptions are never precisely fulfilled in real data, so it doesn't make sense to say that "model assumptions have to be fulfilled". It is more important to understand what happens if they are not fulfilled, and this pretty much always depends on how exactly they are not fulfilled.
Some statements regarding k-means:
k-means can be derived as maximum likelihood estimator under a certain model for clusters that are normally distributed with a spherical covariance matrix, the same for all clusters.
Bock, H. H. (1996) Probabilistic models in cluster analysis. Computational Statistics & Data Analysis, 23, 5β28. URL: http://www.sciencedirect.com/science/article/pii/0167947396889195
Pollard has shown a general consistency result for k-means in a nonparametric setup, meaning that for (pretty much, existing second moments assumed) all distributions k-means is a consistent estimator of its own canonical functional (see below).
Pollard, D. (1981) Strong consistency of k-means clustering. Annals of Statistics, 9, 135β140. URL: https://doi.org/10.1214/aos/1176345339.
Clustering can generally be interpreted as "constructive", meaning that a clustering method can be seen as not in the first place recovering "true" underlying clusters following a certain model, but rather as constructing a grouping that satisfies certain criteria defined by the clustering method itself. As such, the k-means objective function, minimising object's squared Euclidean distances to the centroid of the cluster they are assigned to, defines its own concept of what a cluster actually is, and will give you the corresponding clusters whatever the underlying distribution is. This can be generalised to a definition of a functional defining k-means type clusters for any underlying distributions, which Pollard's theory is about. The important question here is whether this definition of clusters in a given application is what is relevant and useful to the user. This depends on specifics of the situation, and particularly not only on the data or the data generating process, but also on the aim of clustering, and how the clusters are meant to be used. (Similar statements can by the way be made about least squares regression and many other statistical methods.)
As far as I know, there is no theoretical result about k-means that states that it requires similar cluster sizes (at least not if "size" refers to the number of points; the "same covariance matrix" assumption of item 1 translates as "same spread in data space").
What is important is to understand what kind of clusters k-means tend to produce. And here in fact item 1 enters again. One can say several things:
(a) k-means is based on a variance criterion that treats all variables and clusters in the same manner, meaning that within-cluster variation tends to be the same for all clusters and all variables (the latter means "spherical").
(b) Comparing the k-means objective function with objective functions based on corresponding mixture models shows that k-means in comparison favours similar cluster sizes, although it doesn't enforce them.
(c) k-means strongly avoids large distances within clusters. This particularly means that if the data has groups with strong separation, k-means will find them (provided k is specified correctly), even if they are not spherical and/or have strongly different numbers of points. What this also means is that clusters will tend to be compact, i.e., not have large within-cluster distances, even if there are connected subsets in the data that spread widely and do have such large distances within them.
(d) One additional way to understand k-means is that it provides a Voronoi tessellation of the data space.
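To make (a)-(d) concrete, here is a minimal pure-Python sketch of Lloyd's algorithm for one-dimensional k-means (an illustrative toy, not any particular library's implementation; the data and starting centroids are made up): the assignment step is exactly the Voronoi partition of item (d), and the update step recomputes each centroid as its cluster mean, which never increases the within-cluster sum of squares.

```python
# Minimal Lloyd's algorithm for 1-D k-means (illustrative toy only).
def kmeans(points, centroids, iters=100):
    for _ in range(iters):
        # Assignment step: the Voronoi partition of item (d); each point
        # goes to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)), key=lambda j: (p - centroids[j]) ** 2)
            clusters[j].append(p)
        # Update step: each centroid becomes the mean of its cluster,
        # which never increases the within-cluster sum of squares.
        centroids = [sum(c) / len(c) if c else m
                     for c, m in zip(clusters, centroids)]
    return centroids, clusters

# Two strongly separated groups: k-means finds them (cf. item (c)).
points = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
centroids, clusters = kmeans(points, centroids=[0.0, 5.0])
print(centroids)  # approximately [1.0, 9.0]
```

With well-separated groups the algorithm converges after one pass here; with overlapping or oddly shaped groups it still returns compact Voronoi cells, which is exactly the behaviour discussed above.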
As normally in cluster analysis data don't come with the clusters known, it is very hard to check the assumptions automatically and formally. Particularly, if you run k-means and indeed find clusters that are not of similar sizes or not spherical, this doesn't necessarily mean that anything has gone wrong for the reasons stated above.
It is important to make sure that the k-means objective function makes sense for the specific clustering problem at hand (aim and use of clustering).
The best data-based diagnostics are in my view visual: scatterplots, maybe with dimension reduction or rotation. k-means is not good if cluster structures of interest can then be seen that disagree with the found clusters. Depending on the clustering aim, it may also not be good if the found clusters turn out not to be separated from each other (a lower number of clusters may be advisable in such a case, or no clustering at all).
As k-means doesn't scale variables (related to "sphericity"), k-means is not advisable with variables that have different and unrelated measurement units, although it may be fine after variables have been standardised.
PS: I add comments on two issues.
(a) Some say that "a key assumption of k-means is that the number of clusters has to be known." Not really. There are several methods in the literature that allow one to fit k-means for different numbers of clusters and then decide which one is best, either by optimising a validity criterion or by looking for a small number of clusters after which the objective function is no longer strongly (or significantly) improved. Look for Average Silhouette Width, Calinski & Harabasz criterion, gap statistic, Sugar and James approach, prediction strength, bootstrap stability etc. Now it is true that these different approaches in many situations do not agree. But the number of clusters problem ultimately comes with the same issues as the clustering problem more generally, namely that there is no uniquely defined true number of clusters, and an appropriate number of clusters cannot be objectively decided from the data alone but requires knowledge of the aim and use of the clustering. The data cannot decide, for example, how compact clusters ultimately have to be. There are methods that come with an in-built decision of the number of clusters, but what I wrote before applies to them as well. Many users like an automatic decision of the number of clusters as they don't feel confident to make decisions themselves, but this basically means that the user hands control over this decision to some usually not well understood algorithm, without any guarantee that the algorithm does something appropriate for the application at hand (and if required, such decision rules are also available for k-means).
(b) Here is an older thread discussing k-means: How to understand the drawbacks of K-means. This focuses strongly on the "drawbacks" of k-means, and there are some good examples that help to understand what k-means actually does and doesn't do. As such that thread is very useful. However, I have an issue with pretty much all answers there, which is that they seem to implicitly assume that it is clear in any given situation what "the true clusters" are, but it isn't. No formal definition is given, people just show pictures and say "what k-means do is obviously wrong here", appealing to some supposedly general human intuition what the true clusters should be. Here is an example (taken from David Robinson's answer in that thread):
So k-means gives a counterintuitive solution here, and the writer claims that the correct solution should be to have one cluster for the points around the outer circle and the other one for the inner point cloud. But no definition of a clustering has been given according to which this is the "truth". There is no scientific basis for this. It is true that such a solution is desired in certain applications, particularly when mimicking human pattern recognition. But this is not what clustering always is about. Particularly, in some applications large within-cluster distances are not acceptable (for example when clustering is used to represent small compact data subsets by their centroid for efficient information reduction), and then the supposedly "correct" clustering doesn't work at all (although the 2-means solution is arguably not much better, and a larger number of clusters should be fitted that respects the separation between outer circle and inner point cloud, which can well be done with k-means). The bottom line is that k-means has a certain behaviour that may not be appropriate in a given situation, but this doesn't mean it's "wrong" in any objective sense.
27,490 | What are the k-means algorithm assumptions? | There are a bunch of different algorithms that can be applied to the k-means inference problem, so the particular assumptions and statistical interpretation will depend on the specific algorithm you're using. Notwithstanding this generality, I'm going to assume that you're talking about the "standard" k-means algorithm that proceeds by minimising the within-cluster sum-of-squares for pre-specified values of $k$, where clusters are determined by distance from a set of cluster-means.
Algorithms for this variant of the problem take in a data vector $\mathbf{x}=(x_1,...,x_n)$ and determine a set of means $\boldsymbol{\mu}=\{\mu_1,...,\mu_k\}$ that solve the following optimisation problem:
$$\underset{\boldsymbol{\mu}}{\text{Minimise}} \quad \quad \sum_{i=1}^n \underset{r}{\min} ||x_i - \mu_r||^2.$$
In this optimisation problem the clusters are determined by closeness to the means (each data point is assigned to the cluster for the mean that is closest to it in terms of Euclidean distance) and we minimise the within-cluster sum-of-squares of the resulting clusters.
Statistical Interpretation: Because this optimisation minimises sums-of-squares, the standard statistical interpretation is that it is equivalent to taking the maximum likelihood estimator (MLE) for a statistical model that is a "mixture" of $k$ Gaussian distributions with the same variances but with means given by $\boldsymbol{\mu}$, where each data point is taken to belong to the Gaussian distribution with the closest mean. In this interpretation, we reframe the objective function as:
$$\boldsymbol{\mu}_\text{MLE} \equiv \underset{\boldsymbol{\mu}}{\text{arg max}} \ \ell_\mathbf{x}(\boldsymbol{\mu})
\quad \quad \quad
\ell_\mathbf{x}(\boldsymbol{\mu}) = \sum_{i=1}^n \log \text{N}(x_i | \mu_{R(i)}, \sigma^2),$$
where the index $R(i) \equiv \text{arg min}_r ||x_i - \mu_r||^2$ gives the group with the closest mean. Note that this is just one statistical interpretation for one algorithm for k-means: it operates by reinterpreting minimisation of a sum-of-squares as maximisation of a normal density. There are other interpretations that can be given for other variants of the algorithm; since there are many different algorithms there are many different statistical interpretations and the general problem is complex.
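This equivalence is easy to check numerically: with $\sigma$ fixed, the log-likelihood above is an affine decreasing function of the within-cluster sum of squares, so both criteria rank candidate mean sets identically. A small pure-Python check (toy data and candidate mean sets are made up for illustration):

```python
import math

def wcss(points, means):
    # Within-cluster sum of squares: each point contributes its squared
    # distance to the closest candidate mean.
    return sum(min((p - m) ** 2 for m in means) for p in points)

def loglik(points, means, sigma=1.0):
    # Log-likelihood when each point belongs to the Gaussian with the
    # closest mean (common fixed variance sigma^2).
    const = -0.5 * math.log(2 * math.pi * sigma ** 2)
    return sum(const - min((p - m) ** 2 for m in means) / (2 * sigma ** 2)
               for p in points)

points = [0.1, 0.4, 3.9, 4.2]
candidates = [[0.0, 4.0], [1.0, 3.0], [2.0, 2.5]]
best_by_wcss = min(candidates, key=lambda m: wcss(points, m))
best_by_loglik = max(candidates, key=lambda m: loglik(points, m))
print(best_by_wcss == best_by_loglik)  # True
```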
27,491 | OLS regression results: p-values > 0.10, how to proceed?
Assuming that there are no problems with model assumptions, the model should be used as it is. Insignificant variables should not be removed. Removing them would invalidate any tests that are run within the reduced models. (Removing insignificant variables seems to be a common practice, but that doesn't make it better. Occasionally there are reasons such as removing variables that are potentially expensive to observe in the future when using the model for prediction, or that the number of observations is too small for fitting a full model with reasonable reliability, but I don't see such reasons here; even in such cases there are often better criteria than significance.)
27,492 | OLS regression results: p-values > 0.10, how to proceed? | If you remove independent variables, even ones that are not statistically significant, it will change the coefficient for the wealth variable (and the other variables). It will also reduce your adjusted R-squared value, which at .287 is already not great (but not bad either). This means of course that 28.7% of the variance in the outcome variable is explained by the independent variables in the model.
Removing some of the independent variables could possibly make the p-value for the Wealth variable smaller, but only because Wealth may in some way correlate with those independent variables. In other words, you want all those independent variables left in the equation because the regression then controls for them, and does not falsely ascribe their effect to Wealth. A classic example of this is a regression associating drinking with bad health. But smoking is often associated with drinking. When smoking is also included in the regression, drinking is no longer significantly associated with bad health.
You can actually remove some highly non-significant independent variables and see how the other variables' coefficients change. Juggling them around in this way is part of the art of regression, but this requires good subject area knowledge. It is also useful to see how well these independent variables correlate with each other. If they do, you will have multicollinearity which will weaken the predictive power of the variables that correlate with each other; in which case it's usually best to remove one of them from the regression. This transfers their effect on the dependent variable to the remaining independent variable(s), at least to the extent that they were correlated with each other.
27,493 | How to tune hyperparameters in a random forest | Number of trees is not a parameter that should be tuned; it should usually just be set large enough. There is no risk of overfitting in random forest with a growing number of trees, as they are trained independently from each other. See our paper for more information about this: https://arxiv.org/abs/1705.05654
Max depth is a parameter that most of the time should be set as high as possible, but possibly better performance can be achieved by setting it lower.
There are more parameters in random forest that can be tuned, see here for a discussion: https://arxiv.org/pdf/1804.03515.pdf
If you have more than one parameter you can also try out random search and not grid search, see here good arguments for random search: http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf
If you just want to tune these two parameters, I would set ntree to 1000 and try out different values of max_depth. You can evaluate your predictions by using the out-of-bag observations, which is much faster than cross-validation. ;)
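A sketch of this recipe in Python (illustrative only: the forest fit and out-of-bag scoring are replaced by a made-up placeholder function `oob_score`, since the point here is just the shape of the tuning loop; substitute a real model fit in practice):

```python
def oob_score(max_depth, ntree=1000):
    # Made-up stand-in for: train a forest with `ntree` trees at this
    # depth and score it on the out-of-bag observations.
    return 0.9 - 0.01 * abs(max_depth - 12)

ntree = 1000                  # set large enough, not tuned
depths = [4, 8, 12, 16, 20]   # candidate values of max_depth
scores = {d: oob_score(d, ntree) for d in depths}
best_depth = max(scores, key=scores.get)
print(best_depth)  # 12 under this made-up scorer
```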
27,494 | How to tune hyperparameters in a random forest | Okay,
So do max_depth = [5,10,15..], n_estimators = [10,20,30]. Separate your training set into a training' and validation' set. Then loop over all combinations of max_depth and n_estimators and select the combination with the best loss on the validation' set. Then you can finally calculate the validation error on the actual validation set.
You can simply just choose the optimal combination based on the validation' set alone, but you would be using the same set to evaluate again and again, and due to multiple hypothesis testing it is likely you would get a better-appearing solution on your validation set but not on the final test.
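The loop described above might look like this (a sketch: `fit_and_score` is a made-up stand-in for training on the training' set and returning the loss on the validation' set, and the grids are the ones from the answer):

```python
from itertools import product

def fit_and_score(max_depth, n_estimators, train, valid):
    # Made-up placeholder: train on `train`, return the loss on `valid`.
    return abs(max_depth - 10) + abs(n_estimators - 20) / 10

train_prime = list(range(80))        # training' set (indices, say)
valid_prime = list(range(80, 100))   # validation' set
grid = product([5, 10, 15], [10, 20, 30])
best = min(grid, key=lambda p: fit_and_score(*p, train_prime, valid_prime))
print(best)  # (10, 20) under this made-up loss
```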
27,495 | How to tune hyperparameters in a random forest | Let's use some convention. Let P be the number of features in your data, X, and N be the total number of examples. mtry is the parameter in RF that determines the number of features you subsample from all of P before you determine the best split. nodesize is the parameter that determines the minimum number of observations in your leaf nodes (i.e. you don't want to split a node beyond this).
When the parameter has a high range it is suggested to use a log scale. Why? You typically have larger jumps in accuracy (or whatever metric) when mtry goes from 1 to 10 than when it goes from 90 to 100, and so testing all the numbers between 90 and 100 would waste more time. This is true of some other parameters as well. I would suggest watching this video as well as some of the resources linked by answers in this thread.
The second idea is using a modern hyper param search algorithm like Bayesian Optimization scheme here
Next you should get a feel for your data:
Do a small number of predictor variables have an outsized effect on the response (higher mtry may do better in this case)? Or do a lot of small other variables matter as well (lower mtry does better)? So you see this will affect the choice of your mtry. Just fit a linear model to get a sense of feature importance. Maybe you will catch a lucky break and get a couple of features with really high importance and your model is simple.
If there are a lot of noise features then lower mtry is not so useful. Scatter plots are useful. You can also determine noise features by randomly permuting the X's of a single feature (i.e. just 1 column of the X) and comparing its fit before and after, using a linear model against the Y. Noisy features will definitely not register any improvement in fit, or may somehow make it better after permutation (Word of Caution: interactions between features may make this last step a bit dicey to use as a guiding tool to determine noise).
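The permutation check just described can be sketched in a few lines (made-up data, pure Python; the correlation with Y stands in for the fit of a one-variable linear model, since for simple regression the R-squared is just the squared correlation): an informative column's association with Y collapses once that single column is shuffled.

```python
import random

def pearson(xs, ys):
    # Sample Pearson correlation, computed from scratch.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
informative = [float(i) for i in range(50)]           # a real signal column
y = [v + 0.1 * random.random() for v in informative]  # response driven by it

shuffled = informative[:]
random.shuffle(shuffled)  # permute just this one column, leaving y fixed

# The informative column correlates strongly; its permuted copy does not.
print(abs(pearson(informative, y)), abs(pearson(shuffled, y)))
```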
mtry ~ sqrt(P) is the rule of thumb based on academic paper linked in some of the answers here.
You can do similar tests for other parameters to get an intuition(or positive confirmation) about your parameter values.
Breiman(2001) commented that the randomness used in the tree has to aim for low correlation while maintaining reasonable strength. This is better unpacked in the https://arxiv.org/pdf/1804.03515.pdf paper that was linked by an answer above. I enjoyed reading it.
27,496 | What's higher, $E(X^2)^3$ or $E(X^3)^2$ | This indeed can be proven by Jensen inequality.
Hint: Note that for $\alpha > 1$ the function $x^{\alpha}$ is convex in $\left[0, +\infty\right)$ (that's where you use the assumption $X \ge 0$). Then Jensen's inequality gives
$$
\mathbb{E}\left[Y\right]^{\alpha} \le \mathbb{E}\left[Y^{\alpha}\right]
$$
and for $\alpha < 1$, it is the other way around.
Now, transform the variables to something comparable, and find the relevant $\alpha$.
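One way to carry the hint through: take $Y = X^3$ and $\alpha = 2/3 < 1$, so $\mathbb{E}[X^2] = \mathbb{E}[(X^3)^{2/3}] \le \mathbb{E}[X^3]^{2/3}$, and cubing both sides gives $E(X^2)^3 \le E(X^3)^2$. A quick exact check for $X \sim \text{Uniform}(0,1)$, where $E[X^k] = \frac{1}{k+1}$ (pure Python, standard library only):

```python
from fractions import Fraction

def moment(k):
    # E[X^k] for X ~ Uniform(0, 1) is 1 / (k + 1).
    return Fraction(1, k + 1)

lhs = moment(2) ** 3  # E(X^2)^3 = 1/27
rhs = moment(3) ** 2  # E(X^3)^2 = 1/16
print(lhs, rhs, lhs <= rhs)  # 1/27 1/16 True
```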
27,497 | What's higher, $E(X^2)^3$ or $E(X^3)^2$ | Lyapunov's Inequality (See: Casella and Berger, Statistical Inference 4.7.6):
For $1 < r < s < \infty$:
$$
\mathbb{E}[|X|^r]^\frac{1}{r} \leq \mathbb{E}[|X|^s]^\frac{1}{s}
$$
Proof:
By Jensen's inequality for convex $\phi$: $\phi(\mathbb{E}X) \leq \mathbb{E}[\phi(X)]$
Consider $\phi(Y) = Y^t$, then $(\mathbb{E}[Y])^t \leq \mathbb{E}[Y^t]$ where $Y = |X|^r$
Substitute $t = \frac{s}{r}$: $(\mathbb{E}[|X|^r])^{\frac{s}{r}} \leq \mathbb{E}[|X|^{r\frac{s}{r}}]$ $\implies \mathbb{E}[|X|^r]^\frac{1}{r} \leq \mathbb{E}[|X|^s]^\frac{1}{s}$
In general for $X >0$ this implies:
$ \mathbb{E}[X] \leq (\mathbb{E}[X^2])^\frac{1}{2} \leq (\mathbb{E}[X^3])^\frac{1}{3} \leq (\mathbb{E}[X^4])^\frac{1}{4} \leq \dots $
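A numeric sanity check of this chain on a small made-up nonnegative sample (the empirical distribution is itself a distribution, so the moment norms $\mathbb{E}[|X|^r]^{1/r}$ must be nondecreasing in $r$):

```python
def moment_norm(sample, r):
    # E[|X|^r]^(1/r) under the empirical distribution of `sample`.
    return (sum(abs(x) ** r for x in sample) / len(sample)) ** (1.0 / r)

sample = [0.2, 0.7, 1.3, 2.9, 4.1]  # arbitrary made-up values
norms = [moment_norm(sample, r) for r in (1, 2, 3, 4)]
print(norms)
# Lyapunov's inequality: the sequence is nondecreasing in r.
assert all(a <= b for a, b in zip(norms, norms[1:]))
```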
27,498 | What's higher, $E(X^2)^3$ or $E(X^3)^2$ | Suppose $X$ has a uniform distribution on $[0,1]$. Then $E(X^2) = \frac{1}{3}$ and so $E(X^2)^3 = \frac{1}{27}$, while $E(X^3) = \frac{1}{4}$ so $E(X^3)^2 = \frac{1}{16}$. So in this case $E(X^3)^2 > E(X^2)^3$. Can you generalize this or find a counterexample?
27,499 | Why is the slope always exactly 1 when regressing the errors on the residuals using OLS? | whuber's answer is great! (+1) I worked the problem out using notation most familiar to me and figured the (less interesting, more routine) derivation may be worthwhile to include here.
Let $y = X \beta^* + \epsilon$ be the regression model, for $X \in \mathbb{R}^{n \times p}$ and $\epsilon$ the noise. Then the regression of $y$ against the columns of $X$ has normal equations $X^T\left(y - X \hat\beta\right) = 0,$ yielding estimates $$\hat\beta = \left(X^T X \right)^{-1} X^T y.$$ Therefore the regression has residuals $$r = y - X \hat\beta = \left( I - H \right) y = \left( I - H \right) \epsilon,$$ for $H = X (X^T X)^{-1} X^T$.
Regressing $\epsilon$ on $r$ results in an estimated slope given by
\begin{align*}
(r^T r)^{-1} r^T \epsilon
& = \left( \left[ \left(I - H\right) \epsilon \right]^T \left[ \left(I - H\right) \epsilon \right] \right)^{-1} \left[ \left(I - H\right) \epsilon \right]^T \epsilon \\
& = \frac{\epsilon^T \left( I - H \right)^T \epsilon}{\epsilon^T \left( I - H \right)^T \left( I - H \right) \epsilon} \\
& = \frac{\epsilon^T \left( I - H \right) \epsilon}{\epsilon^T \left( I - H \right) \epsilon} \\
& = 1,
\end{align*}
since $I-H$ is symmetric and idempotent, and $\epsilon \not\in \mathrm{im}(X)$ almost surely, so the denominator is nonzero.
Further, this argument also holds if we include an intercept in the regression of the errors on the residuals, provided an intercept was included in the original regression, since the residuals are then orthogonal to the constant column (i.e. $1^T r = 0$, from the normal equations).
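The identity above is easy to confirm numerically. The following sketch (an added illustration with arbitrary simulated data, not from the original answer) builds the hat matrix $H$, the residuals $r = (I-H)\epsilon$, and the OLS slope of $\epsilon$ regressed on $r$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # design with intercept
beta = rng.normal(size=p + 1)
eps = rng.normal(size=n)                 # the true errors (known by simulation)
y = X @ beta + eps

H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix H = X (X'X)^{-1} X'
r = y - H @ y                            # residuals r = (I - H) eps

slope = (r @ eps) / (r @ r)              # OLS slope of eps on r
print(slope)                             # 1.0 up to floating-point error
```

Because $1^T r = 0$, adding an intercept to this last regression leaves the slope unchanged, matching the remark above.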
27,500 | Why is the slope always exactly 1 when regressing the errors on the residuals using OLS? | Without any loss of conceptual (or practical) generality, first remove the constant from the variables as described at How exactly does one "control for other variables". Let $x$ be the regressor, $e$ the error, $Y=\beta x + e$ the response, $b$ the least-squares estimate of $\beta$, and $r = Y - bx$ the residuals. All of these vectors lie in the same plane, allowing us to draw pictures of them. The situation can be rendered like this, where $O$ designates the origin:
This picture was constructed beginning with $\beta x$, then adding the error $e$ to produce $Y$. The altitude was then dropped down to the base, meeting it at the least-squares estimate $bx$. Clearly the altitude is the residual vector $Y-bx$ and so has been labeled $r$.
The base of the triangle is parallel to the regressor vector $x$. The altitudes of the sides $OY$ and $(\beta x)Y$ are the altitude of the triangle itself. By definition, the residual $r$ is perpendicular to the base: therefore, distances away from the base can be found by projection onto $r$. Thus the triangle's altitude can be found in any one of three ways: regressing $Y$ against $r$ (finding the height of $Y$); regressing $e$ against $r$ (finding the height of $e$); or regressing $r$ against $r$ (finding the height of $r$). All three values must be equal (as you can check by running these regressions). The latter obviously is $1$, QED.
For those who prefer algebra, we may convert this geometric analysis into an elegant algebraic demonstration. Simply observe that $r$, $e=r+(\beta-b)x$, and $Y=e+\beta x = r + (2\beta-b)x$ are all congruent modulo the subspace generated by $x$. Therefore they must have equal projections into any space orthogonal to $x$, such as the one generated by $r$, where the projection of $r$ has coefficient $1$, QED. (Statistically, we simply "take out" the component of $x$ in all three expressions, leaving $r$ in each case.)
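The claim that all three regressions against $r$ return the same coefficient can likewise be checked numerically. This sketch (an added illustration with simulated data, using the constant-free setup of the answer) computes the projection coefficients of $Y$, $e$, and $r$ onto $r$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.normal(size=n)          # single regressor, constant already removed
beta = 2.5                      # arbitrary true slope for the simulation
e = rng.normal(size=n)          # errors
Y = beta * x + e

b = (x @ Y) / (x @ x)           # least-squares estimate of beta
r = Y - b * x                   # residuals, orthogonal to x by construction

# Projection coefficients onto r: all three equal 1, as argued above.
coefs = [(r @ v) / (r @ r) for v in (Y, e, r)]
print(coefs)
```

Since $Y$, $e$, and $r$ differ only by multiples of $x$, and $r \perp x$, each dot product with $r$ collapses to $r^T r$, giving the common coefficient $1$.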
Without any loss of conceptual (or practical) generality, first remove the constant from the variables as described at How exactly does one "control for other variables". Let $x$ be the regressor, $e$ the error, $Y=\beta x + e$ the response, $b$ the least-squares estimate of $\beta$, and $r = Y - bx$ the residuals. All of these vectors lie in the same plane, allowing us to draw pictures of them. The situation can be rendered like this, where $O$ designates the origin:
This picture was constructed beginning with $\beta x$, then adding the error $e$ to produce $Y$. The altitude was then dropped down to the base, meeting it at the least-squares estimate $bx$. Clearly the altitude is the residual vector $Y-bx$ and so has been labeled $r$.
The base of the triangle is parallel to the regressor vector $x$. The altitudes of the sides $OY$ and $(\beta x)Y$ are the altitude of the triangle itself. By definition, the residual $r$ is perpendicular to the base: therefore, distances away from the base can be found by projection onto $r$. Thus the triangle's altitude can be found in any one of three ways: regressing $Y$ against $r$ (finding the height of $Y$); regressing $e$ against $r$ (finding the height of $e$), or regressing $r$ against $r$ (finding the height of $r$). All three values must all be equal (as you can check by running these regressions). The latter obviously is $1$, QED.
For those who prefer algebra, we may convert this geometric analysis into an elegant algebraic demonstration. Simply observe that $r$, $e=r+(\beta-b)x$, and $Y=e+\beta x = r + (2\beta-b)x$ are all congruent modulo the subspace generated by $x$. Therefore they must have equal projections into any space orthogonal to $x$, such as the one generated by $r$, where the projection of $r$ has coefficient $1$, QED. (Statistically, we simply "take out" the component of $x$ in all three expressions, leaving $r$ in each case.) | Why is the slope always exactly 1 when regressing the errors on the residuals using OLS?
Without any loss of conceptual (or practical) generality, first remove the constant from the variables as described at How exactly does one "control for other variables". Let $x$ be the regressor, $e |