Modelling count data with extreme underdispersion - what distribution?
The Conway-Maxwell-Poisson model has recently been shown to handle arbitrarily small underdispersion (see Huang 2020). For example, it is possible to have a mean of 15 and a variance of 2, say, by choosing the dispersion parameter to be large enough. In the extreme limits, it is even possible to have a mean of 15 and zero variance, or a mean of 15.2 and variance 0.2*0.8 = 0.16, which is the smallest variance possible for a count distribution with mean 15.2. The mean-parametrized Conway-Maxwell-Poisson model is implemented in R in the mpcmp package (Fung et al. 2020). Other alternatives with the potential to be arbitrarily underdispersed include the Double Poisson of Efron (JASA, 1986) and the exponentially re-weighted Poisson of Ridout & Besbeas (2004). However, neither of these models parametrizes the distribution via the mean, so it is harder to see what happens as the dispersion gets arbitrarily small.
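As a quick numerical aside (mine, not from the original answer), the smallest-variance claim for mean 15.2 corresponds to a two-point distribution on the neighbouring integers 15 and 16:

```python
# Least-variance integer-valued distribution with mean 15.2:
# put weight 0.8 on 15 and 0.2 on 16.
p = {15: 0.8, 16: 0.2}
mean = sum(k * w for k, w in p.items())
var = sum((k - mean) ** 2 * w for k, w in p.items())
print(mean, var)  # mean = 15.2, variance = 0.2 * 0.8 = 0.16
```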
Doubt about Rigged election in South Korea
If the voters are split randomly between A and B, it would not be surprising that their voting behaviour is the same and (given the large numbers involved) that the proportions of votes to each party under regimes A and B are also nearly the same. However, I would expect that older people with less mobility from their residential area, for instance, would fall mainly under regime A, and youngsters (studying away from their homes, for instance) would fall under regime B. If older people and youngsters vote differently (as would appear likely), I agree with you that the result is surprising.
Doubt about Rigged election in South Korea
I am not very familiar with what election results should look like, e.g. how big the variation is and how much it would differ from the null-hypothesis contingency table. Looking at the result, we can do a chi-squared test similar to what you have:

M = matrix(c(15797,6185,11335,4460,5296,2073),ncol=3)
chisq.test(M)

        Pearson's Chi-squared test

data:  M
X-squared = 0.052314, df = 2, p-value = 0.9742

If we ask for the probability of getting a result this close to the expected one, i.e. a chi-squared statistic less than 0.052314, it is 1 - 0.9742 = 0.0258. Normally we would do:

pchisq(0.052314,2)
[1] 0.02581787

However, this is only one observation/experiment. Ideally you collect such statistics over many local areas, perform the same analysis, and ask whether this result is a blip or whether there are indeed trends.

I can give a well-known example. R. A. Fisher noticed in Gregor Mendel's experimental data that, for many experiments, the number of seeds with a certain phenotype matched the expected number closely: an exceptionally good fit of data to theory. He tested the probability of getting a chi-squared statistic less than the observed one for each experiment Mendel had, and postulated that, if the experiments were independent and followed the null hypothesis, the probability of getting an overall better result if all experiments were repeated would be 7/100000. More details about the analysis are in this paper. Fisher even proposed: "Although no explanation can be expected to be satisfactory, it remains a possibility among others that Mendel was deceived by some assistant who knew too well what was expected. This possibility is supported by independent evidence that the data of most, if not all, of the experiments have been falsified so as to agree closely with Mendel's expectations."

The reason for pointing out the above example is that, even with Fisher's analysis, it is still widely debated whether Mendel manipulated his data, because there are biological reasons we still know little of. It goes beyond the statistics.

One cannot easily conclude from analysis of one election result that it is rigged. Even if you collect data over multiple areas, there are still many factors one needs to consider and take into account.
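As a cross-check (a sketch of mine, not part of the original answer), the same test can be reproduced in Python with scipy; note that R's matrix() fills column-major, so the 2x3 table is:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Same 2x3 table as the R call above, laid out row by row
M = np.array([[15797, 11335, 5296],
              [6185,  4460,  2073]])

stat, p, dof, expected = chi2_contingency(M, correction=False)
# lower-tail probability: chance of a fit at least this good under H0
lower_tail = chi2.cdf(stat, dof)
print(stat, dof, lower_tail)  # stat ~ 0.0523, dof = 2, lower tail ~ 0.0258
```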
Front-Door Adjustment formula: confusing notation
It means $$ P(y|\text{do}(X=x)) = \sum_z \left[ P(z|x) \sum_{x'} \left\{ P(y|x',z) P(x') \right\} \right]. $$ This can be seen from equation (3.15) on p. 68 which is $$ P(y|\text{do}(X=x)) = \sum_z \sum_{x'} \left\{ P(y|x',z) P(x') P(z|x) \right\} $$ and can be rearranged to be $$ P(y|\text{do}(X=x)) = \sum_z \sum_{x'} \left\{ P(z|x) P(y|x',z) P(x') \right\}, $$ the latter ordering of conditional probabilities corresponding to the ordering in the equation at the top of this answer.
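The rearrangement can also be checked numerically on random probability tables (a quick Python sketch; the toy distributions below are arbitrary illustrations of mine, not from Pearl's book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary discrete distributions: nx, nz, ny are the category counts
nx, nz, ny = 3, 2, 2
Px = rng.dirichlet(np.ones(nx))                    # P(x')
Pz_x = rng.dirichlet(np.ones(nz), size=nx)         # P(z | x)
Py_xz = rng.dirichlet(np.ones(ny), size=(nx, nz))  # P(y | x', z)

x, y = 0, 1
# Nested form: sum_z P(z|x) * [ sum_{x'} P(y|x',z) P(x') ]
nested = sum(Pz_x[x, z] * sum(Py_xz[xp, z, y] * Px[xp] for xp in range(nx))
             for z in range(nz))
# Flat form (eq. 3.15): sum_z sum_{x'} P(y|x',z) P(x') P(z|x)
flat = sum(Py_xz[xp, z, y] * Px[xp] * Pz_x[x, z]
           for z in range(nz) for xp in range(nx))

assert abs(nested - flat) < 1e-12  # same quantity, summed in a different order
```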
Three way fixed effects vs combining two of the effects
Consider the model $$(1) \ \ w_{it} = \mathbf x_{it}^\top \beta + \delta_t +\psi_{a(i,t)} + \eta_{k(i,t)} + \epsilon_{it},$$ with the area effect $\psi_a$ and sector effect $\eta_k$ unobserved. Assuming that $\mathbf x_{it}$ is correlated with the area and sector effects, the OLS estimator $$\hat \beta_{OLS}:=(\sum_i \sum_t\mathbf x_{it}\mathbf x_{it}^\top)^{-1}(\sum_i \sum_t\mathbf x_{it}w_{it})$$ associated with the estimation equation $$w_{it} = \mathbf x_{it}^\top \beta + u_{it}$$ is inconsistent because $\mathbb E[\mathbf x_{it}u_{it}]=\mathbb E[\mathbf x_{it}(\delta_t +\psi_{a(i,t)} + \eta_{k(i,t)} + \epsilon_{it})]\not=0$. Doing the estimation with fixed effects for area $\psi_a$, sector $\eta_k$ and time $\delta_t$ will give you consistent estimates assuming $\mathbb E[\mathbf x_{it}\epsilon_{it}]=0$. But then again, so will doing the estimation with area-sector fixed effects $\phi_{ak}$, hence using the estimation equation $$(2) \ \ w_{it} = \mathbf x_{it}^\top \beta + \delta_t + \phi_{a(i,t),k(i,t)} + \epsilon_{it},$$ where the area-sector specific fixed effect is $\phi_{a(i,t),k(i,t)}$. This is perhaps most simply seen by recognizing that the first model is an instance of the latter under the restriction $$\phi_{a(i,t),k(i,t)} = \psi_{a(i,t)} + \eta_{k(i,t)};$$ however, the two estimators are not the same, so estimates might differ. Also, model (2) cannot always be estimated consistently using model (1) as the estimation equation.

In R you should use the lfe package by Simen Gaure; you can find the documentation here. Remember to cluster standard errors on id for panel data. Here is a simulation example (I leave it to you to figure out how to add the fixed effect for time):

library(lfe)
library(data.table)

# Simulate a population of N workers observed over T time periods.
# Balanced panel.
# Workers are assigned to A different areas.
# Workers are assigned to K different sectors.
N <- 1000
T <- 10
NT <- T*N
A <- 30
K <- 10
vA <- 10   # strength of area effect
vK <- 10   # strength of sector effect
beta <- 1  # true effect of x (must be set before y is simulated)

# Initialize vectors for area and sector assignment
area <- rep(NA,NT)
sector <- rep(NA,NT)

# Choose probabilities for assigning an individual to sector and area.
# Probabilities are increasing in index, pA[j+1] > pA[j]; this is used
# to assign individuals with certain observed skill x to certain
# sectors and areas.
# The economic literature suggests that workers sort over sectors and areas
# according to skill (see for example Glaeser and Mare (2001) Cities and Skills
# and Combes (2008) Spatial wage disparities: Sorting matters!)
pA <- (1:A)^4/sum((1:A)^4)
pK <- (1:K)^0.7/sum((1:K)^0.7)

# Check distribution
layout(matrix(1:2,nrow=1))
barplot(table(sample(1:A,size=1000,prob=pA,replace=TRUE)))
barplot(table(sample(K:1,size=1000,prob=pK,replace=TRUE)))

# Set unobserved individual parameter deciding the tendency to sort.
# Individuals with high mu[i] will be in a high sector versus a low sector
# and in a high area versus a low area
mu <- rnorm(N)

# Start loop to assign individuals to sector and area
ii <- 1
for (i in 1:N) {
  # Assign individual to area and sector
  a <- ifelse(mu[i] > 0,sample(1:A,size=1,prob=pA),sample(A:1,size=1,prob=pA))
  k <- ifelse(mu[i] > 0,sample(1:K,size=1,prob=pK),sample(K:1,size=1,prob=pK))
  # The above assigns individuals with high mu to high-index sector and area
  # because probabilities are increasing in index
  for (t in 1:T) {
    # Individual workers do not frequently change sector and area;
    # here the probability of change is chosen as 0.2 (very high,
    # probably more around 5-10%, but we need variation in the panel)
    if (runif(1)<0.2) {
      a <- ifelse(mu[i] > 0,sample(1:A,size=1,prob=pA),sample(A:1,size=1,prob=pA))
    }
    if (runif(1)<0.2) {
      k <- ifelse(mu[i] > 0,sample(1:K,size=1,prob=pK),sample(K:1,size=1,prob=pK))
    }
    # Assign, and note that a and k have changed from last period with probability 0.2
    area[ii] <- a
    sector[ii] <- k
    ii <- ii + 1
  }
}

# Specify area and sector effects; vA and vK control their size.
# They are sorted so the higher-index sector is the high-wage sector and the
# higher-index area is the high-wage area (where the individuals of high mu sort)
area_effect <- sort(vA*runif(A))
sector_effect <- sort(vK*runif(K))

# Define id and time period for each observation
id <- rep(1:N,each=T)
time <- rep(1:T,N)

# Make some covariate ... here made correlated with area and sector.
# mu[i] is used as the mean of individual i's time-varying observed skill x
x <- rnorm(NT,mean=rep(mu,each=T)) + area_effect[area] + sector_effect[sector]

# Check the strength of covariance;
# high covariance implies larger bias in OLS estimates
cov(x,area_effect[area])
cov(x,sector_effect[sector])

# Make the dependent variable using the Mincer wage equation
y <- beta*x + area_effect[area] + sector_effect[sector] + (rt(NT,10)+abs(rt(NT,7)))

dt <- data.table(id=id,time=time,y=y,x=x,area=area,sector=sector,as=interaction(area,sector))
setkey(dt,id,time)

# Start estimation; plain OLS is seen to be inconsistent
lm(y~x,data=dt)

# Must control for sector and area using fixed effects.
# Dummy estimators (break down with large numbers of fixed effects):
# both consistent, but not good with many fixed effects,
# and the standard errors are not clustered on id ...
model1 <- lm(y ~ x + as.factor(sector) + as.factor(area),data=dt)
model2 <- lm(y ~ x + as,data=dt)

# Use lfe (designed to handle many fixed effects).
# Cluster on id because it is a panel (whether this is necessary
# depends on assumptions about the variance)
#               formula  fixed effects   IV  cluster
model3 <- felm(y ~ x  | sector + area | 0 | id, data=dt)
model4 <- felm(y ~ x  | as            | 0 | id, data=dt)

# Check estimates ... all consistent
coef(model1)[2]
coef(model2)[2]
coef(model3)
coef(model4)

Good reads (armed with patience) on the topic: John M. Abowd, Francis Kramarz and David N. Margolis (1999), "High Wage Workers and High Wage Firms", Econometrica, Vol. 67, No. 2, pp. 251-333; Pierre-Philippe Combes & Laurent Gobillon (2015), "The Empirics of Agglomeration Economies", in Handbook of Regional and Urban Economics.
How can I calculate a joint distribution based on marginal and conditional information?
Sketching a diagram of the joint distribution might firm up your understanding as well as help you get the right answer (and spot incorrect answers that might be offered). Usually you don't need to work very hard at this--it's primarily a conceptual exercise--but for accuracy I asked a computer to draw this diagram:

This diagram situates a dot at each point with coordinates $(n,k)$ where the probability is nonzero. (The dots are colored and sized in proportion to their probabilities.) To do that, it creates vertical strips of dots, because each such strip corresponds to an event with a single value of $n.$ Those strips therefore reflect the conditional probability information, which says the strip positioned over a whole number $n$ must have dots at heights $k=1,2,\ldots, 2n$ (all with equal probabilities within each strip). I have highlighted the strip for $n=4$ because that corresponds to the work attempted in the question. To the right of each dot I have posted a formula for the joint (not the conditional) probability.

Recall that the joint probability at $(n,k)$ must be the product of (a) the probability of $n,$ given by $1/2\times n \times 2^{-n},$ and (b) the conditional probability of $k$ given $n;$ because this is assumed uniform and it covers $2n$ possibilities, this conditional probability is $1/(2n).$ Thus the formula for the joint probability is $$P(n,k) = P(n)\times P(k\mid n) = \left\{\begin{array}{lr}\frac{1}{2}\,n\,2^{-n}\,/\,(2n) & 1 \le k \le 2n \\ 0 & \text{otherwise.}\end{array}\right.$$ Notice that the expression on the right-hand side simplifies: $$\frac{1}{2}\,n\,2^{-n}\,/\,(2n) = 2^{-(n+2)}.$$ You can spot-check these values against the diagram if you wish.

As a reality check, let's verify the probabilities sum to unity--but we'll do this in a different way than we constructed the diagram, so that the check might catch any mistakes we might have made. Let's sum the probabilities by rows.

At the bottom two rows, with $k=1$ and $k=2,$ which are identical, you can read off the sequence of probabilities left to right as $2^{-1-2}, 2^{-2-2}, 2^{-3-2}, \ldots = 1/8, 1/16, 1/32, \ldots.$ This is a geometric series that sums (obviously) to $1/4.$ The two rows together sum to $1/2.$

At the next two rows, with $k=3$ and $k=4,$ which are identical, the sequence of probabilities is the same as before, but with the first one omitted. We obtain two rows summing to $1/16 + 1/32 + 1/64 + \cdots = 1/8.$ The two rows together sum to $1/4.$

The pattern is evident: every time you go up two rows you see the same probabilities as before but (a) multiplied by $1/2$ and (b) shifted one unit to the right. Thus the next two rows sum to $1/4\times 1/2 = 1/8,$ the next two sum to $1/8\times 1/2=1/16,$ and so on. Evidently the sum of all the probabilities is $1/2 + 1/4 + 1/8 + \cdots = 1,$ as it should be.

As a mathematical proposition, this diagram has shown how to evaluate the sum $$\sum_{n=1}^\infty n\, 2^{-n} = 2$$ by splitting each term $n\, 2^{-n}$ into $2n$ separate pieces of size $2^{-(n+1)}$ and then adding those pieces in a different order. The evaluation requires knowing only that $1/2+1/4+1/8+\cdots = 1.$
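The same check can be done by machine; here is a small Python sketch (mine, not part of the original answer) using exact rational arithmetic and truncating the infinite sum over $n$:

```python
from fractions import Fraction

total = Fraction(0)
for n in range(1, 60):                       # truncate the infinite sum over n
    p_n = Fraction(n, 2) * Fraction(1, 2) ** n       # P(n) = (1/2) n 2^{-n}
    for k in range(1, 2 * n + 1):
        p_joint = p_n / (2 * n)                      # P(n) * P(k | n)
        assert p_joint == Fraction(1, 2 ** (n + 2))  # matches 2^{-(n+2)}
        total += p_joint

# total should be ~1, up to the truncation error of the outer sum
assert abs(float(total) - 1.0) < 1e-12
```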
How can I calculate a joint distribution based on marginal and conditional information?
The conditional distribution is $$p_{K|N=n}(k)=\frac{1}{2n},\ \ \ k=1,\dots,2n.$$ For the joint, you just multiply this by $p_N(n)$, which you already did, but with a slightly wrong conditional, i.e. $1/n$ instead of $1/(2n)$. You don't necessarily have $k$ as a variable in your joint distribution; where a uniform distribution is involved, the disappearance of some variables from the PMF/PDF shouldn't surprise you. So, the joint is: $$p_{N,K}(n,k)=\frac{2^{-n}}{4},\ \ n\in \mathbb{Z^+},\ k\in\{1,\dots,2n\}$$
Difference between non-contextual and contextual word embeddings
Your understanding is correct. Word embeddings, i.e., vectors you retrieve from a lookup table, are always non-contextual, no matter where the lookup happens. (It is slightly different in ELMo, which uses a character-based network to get a word embedding, but that embedding still does not consider any context.) However, when people say contextual embeddings, they don't mean the vectors from the look-up table; they mean the hidden states of the pre-trained model. As you said, these states are contextualized, but it is somewhat confusing to call them word embeddings.
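A toy illustration of the distinction (not a real model: the vocabulary, the lookup table, and the neighbour-averaging "contextual encoder" below are all made up for illustration):

```python
import numpy as np

# A lookup table assigns one fixed vector per token id, so "playing"
# gets the same vector no matter which sentence it appears in.
rng = np.random.default_rng(0)
vocab = {"playing": 0, "violin": 1, "football": 2}
table = rng.normal(size=(len(vocab), 4))  # static embeddings

def lookup(tokens):
    return np.stack([table[vocab[t]] for t in tokens])

# Minimal stand-in for a contextual encoder: mix each vector with its
# neighbours, so the output for "playing" depends on the sentence.
def contextualize(vecs):
    out = vecs.copy()
    for i in range(len(vecs)):
        out[i] = vecs[max(i - 1, 0):i + 2].mean(axis=0)
    return out

a = lookup(["playing", "violin"])
b = lookup(["playing", "football"])
assert np.allclose(a[0], b[0])            # static: identical vectors
ca, cb = contextualize(a), contextualize(b)
assert not np.allclose(ca[0], cb[0])      # contextual: vectors differ
```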
41,409
Difference between non-contextual and contextual word embeddings
In non-contextual embeddings, 'playing' remains the same, for example in 'playing' a sport and 'playing' the violin. But it changes in contextual word embeddings. Check this post for an illustrative explanation: https://medium.com/@everlearner42/how-contextual-word-embeddings-are-learnt-in-nlp-a0a52fcad1f9
41,410
Spurious relationships: flavours, terminology
Which of these cases are instances of a "spurious relationship"? How could the remaining cases be termed? I think 1. and 2. are both spurious, but they result from taking a finite sample. If we took independent samples from some distribution, say a normal distribution, we would very likely find that the correlation between the two is not exactly zero. Obviously this problem would be worse when the sample sizes are very small. The case of 3. deserves the most attention. I can think of a few situations where this can arise: Spurious correlation due to confounding. The example you gave is a good example, this happens when two variables have a common (often unmeasured) cause. Spurious correlation due to mathematical coupling. This occurs where two variables are linked, for example when two variables are divided by a 3rd variable. This often happens where rates of disease, exposure, sales etc., are created by dividing by the population size. This can induce a large correlation in otherwise unrelated and independent variables. Spurious correlation due to regression to the mean (RTM). Galton is credited with discovering this whereby the offspring of tall parents also tend to be tall, but less tall than the parents, while the offspring of small parents also tend to be small, but less small than the parents, however it can occur in many settings. RTM occurs with any variable that fluctuates within an individual or a population either due to measurement error and/or physiological variation. One example is in longitudinal studies where a variable is measured at several points in time and the interest is in a distal outcome measured once, or cross-sectionally. Methods used to analyse such data often condition on the outcome which induces RTM. The reversal paradox. This is a general term for things like Simpson's Paradox, Lord's Paradox and suppression, in situations where subgroups are being analysed or when mediators are included in a regression. 
I can't really think of anything that fits this description of 4. Bonus question (just in case you have an opinion on the matter): Which ones may deserve the most attention in a quantitative methods class taught to management students? Unsurprisingly I would definitely suggest that those falling under 3. deserve the most attention.
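The mathematical-coupling mechanism under 3. is easy to demonstrate by simulation (a sketch; the distributions, seed, and sample size are arbitrary choices):

```python
import numpy as np

# x and y are independent, but dividing both by a shared third variable z
# induces a strong correlation between the two ratios.
rng = np.random.default_rng(42)
x = rng.normal(100, 10, size=5000)
y = rng.normal(100, 10, size=5000)
z = rng.uniform(1, 10, size=5000)  # shared denominator, e.g. population size

raw = np.corrcoef(x, y)[0, 1]               # near zero: x, y unrelated
coupled = np.corrcoef(x / z, y / z)[0, 1]   # large and positive
assert abs(raw) < 0.1
assert coupled > 0.5
```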
41,411
Spurious relationships: flavours, terminology
As I've noted in a related answer, my view is that it is best to reserve the attribution of "spuriousness" to an incorrect inference from correlation to cause. It is important to be able to talk accurately about evidence of correlation (and other nonlinear associations) between variables in statistical analysis, and this often leads to cases where there is clear evidence of correlation, or some other statistical association between variables. Merely asserting this relationship to be present, when there is evidence that it is indeed present, is certainly not "spurious". Thus, it is not appropriate to refer to inferences of statistical associations as "spurious" in their own right. What is "spurious" is when a person takes evidence of correlation and then uses this to make an inference of a direct causal link between variables, in circumstances where that step is not warranted. For that reason, I find the term "spurious correlation" to be harmful to discussion, since it actually refers to a spurious inference from correlation, which actually does exist, to a cause which does not. The items in your list: None of these situations strike me as inherently "spurious", though they could be accompanied by incorrect inferences in some cases. Items 1-2 of your list merely represent cases where there is sampling error, such that an estimate of a relationship or quantity in a smaller sample is not an accurate reflection of the true relationship or quantity in the larger group from which that sample is drawn. Since statistical methods have appropriate measures of the likely levels of sampling error, there is no need for anything further here. So long as inferences are being made using proper estimators, and appropriate measures of uncertainty are constructed that take account of the sampling error (e.g., using confidence intervals, Bayesian posterior intervals, etc.) nothing "spurious" is occurring. 
In my view, it is not a good idea to conflate sampling error with a spurious inference. Item 3 refers to an actual relationship that is a statistical association, but is merely "uninteresting" because it does not reflect a causal connection between the associated variables. Again, there is nothing inherently "spurious" about recognising the existence of this statistical association, but if a person were to infer a causal link between ice-cream sales and drownings, that would indeed be a spurious inference. Item 4 appears to me to be impossible. If you trace causality back to its philosophical roots, ultimately it is just an attribution to an object of certain kinds of actions that it takes. (Causality is merely "identity applied to action" ---i.e., a thing acts according to its nature.) Thus, any process that generates "data" is taking action, and that action can, in principle, be traced back to the nature of the process and its constituent objects. (Note that we speak metaphysically here, not epistemologically; there may be reasons why we cannot uncover the causal chain.) Which of these items to explain to students: As I see it, there are essentially three principles that come out of your four items, all of which are valuable for an understanding of the interplay between causality and statistical association. Firstly, there is the philosophical question of what causality is at a metaphysical level. Secondly, there is the question of when causality can properly be inferred from statistical association (and when it cannot). And third, there is the question of how we find evidence of statistical association, and how accurate our inferences of statistical association are. Each of these issues is of value when teaching statistics, but the first gets you deeper into the territory of philosophy. 
If you would like your students to develop their skills as experimentalists then they should take some time to confront each of these questions and build up an integrated theory of statistical association and causality. At a minimum, I would expect students who do some statistical courses to come out with a reasonable understanding of methods to estimate statistical associations, and the likely level of sampling error, and I would expect them to understand the injunction that "correlation is not cause". Over time they should develop a deeper understanding of causal structures and their statistical implications, and ultimately they should develop the ability to plan and understand experimental structures that are designed to allow a transition from inference of association to inference of causality. It is certainly desirable if your students can back this up with a reasonably coherent philosophical explanation of causality, but that is quite rare, and it is excusable for that to be left out of a statistics course. (Interested students can be directed to the philosophy department for courses on that subject.)
41,412
What does Sparse PCA implementation in Python do?
Answer: Your code seems to be in line with the cited paper, and the inconsistency is an artefact of the non-standard definition of principal components used in scikit-learn versions < 0.22 (this is mentioned in the deprecation message). You can get meaningful results by setting normalize_components=True and dividing the entries of the variance array by the number of samples, 505. This gives you explained variance ratios like 0.90514782, 0.98727812, 0.99406053, 0.99732234, 0.99940307. And for 3., the most immediate way is to check the source files of sklearn.decomposition on your computer. Details: The code of SparsePCA, as in scikit-learn=0.21.3, has an unexpected artefact: as is, it returns a transformation of the inputs such that the $QR$ decomposition has $R$ diagonal with non-zero values in $\{-1, 1\}$ (rounding to 5 digits after the comma, easy to check) and is therefore not reflective of the preserved variance, as it should be according to Zou, Hastie and Tibshirani. normalize_components is key to getting the meaningful variance ratios. This argument is only present in scikit-learn=0.21.3 and absent in scikit-learn=0.22. Set to False, it induces a normalization of the outputs that hides the information about the explained variance. One can see the details of this normalization in the source code:

class SparsePCA(BaseEstimator, TransformerMixin):
    ...
    def transform(self, X):
        ...
        U = ridge_regression(...)
        if not self.normalize_components:
            s = np.sqrt((U ** 2).sum(axis=0))
            s[s == 0] = 1
            U /= s
        return U

The deprecation warning reads that this normalization is not in line with the common definition of principal components and is removed in version 0.22. If you are still using an old version like me, this normalization can be reversed as mentioned in the brief version of the answer.
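A minimal numpy sketch of what that normalization does (illustrative only, not the scikit-learn code itself): dividing each column of the scores by its norm equalizes the column sums of squares, which is why the explained variance can no longer be read off the output.

```python
import numpy as np

# Fake score matrix with very different column variances
# (505 samples, 5 components, scales chosen arbitrarily).
rng = np.random.default_rng(0)
U = rng.normal(size=(505, 5)) * np.array([10.0, 3.0, 1.0, 0.5, 0.1])

# The per-column normalization done by the old transform():
s = np.sqrt((U ** 2).sum(axis=0))
s[s == 0] = 1
U_norm = U / s

# After normalization every column has sum of squares exactly 1,
# so the variance information is gone.
col_ss = (U_norm ** 2).sum(axis=0)
assert np.allclose(col_ss, 1.0)
```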
41,413
Fleiss kappa vs Cohen kappa
Fleiss' $\kappa$ works for any number of raters, Cohen's $\kappa$ only works for two raters; in addition, Fleiss' $\kappa$ allows for each rater to be rating different items, while Cohen's $\kappa$ assumes that both raters are rating identical items. However, Fleiss' $\kappa$ can lead to paradoxical results (see e.g. Gwet, Handbook of Interrater Reliability), namely that, even with nominal categories, reordering the categories can change the results. But Cohen's version has its own problems and can lead to odd results when there are large differences in the prevalence of possible outcomes (see e.g. Feinstein and Cicchetti, High Agreement but Low Kappa). Gwet's AC1 statistic appears to be immune to these problems. For $R$ raters it is given by $\gamma_1 = \frac{P_a-P_{e|\gamma_1}}{1-P_{e|\gamma_1}} $ where $P_{e|\gamma_1} = \frac{1}{K-1}\sum{\hat{\pi}_k}(1-\hat{\pi}_k)$ and $\hat{\pi}_k = \sum{\frac{R_{ik}}{R}} $
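A sketch implementation of AC1 from these formulas (the observed agreement $P_a$ is taken here to be the usual Fleiss-style pairwise agreement, which the formulas above leave implicit; the input layout is an assumption):

```python
import numpy as np

# `counts` is an N x K table: counts[i, k] = number of raters (out of R)
# who put item i into category k.
def gwet_ac1(counts):
    counts = np.asarray(counts, dtype=float)
    n_items, K = counts.shape
    R = counts.sum(axis=1)[0]
    # Fleiss-style observed pairwise agreement per item, averaged
    pa = ((counts * (counts - 1)).sum(axis=1) / (R * (R - 1))).mean()
    pi_k = counts.mean(axis=0) / R                # category prevalences
    pe = (pi_k * (1 - pi_k)).sum() / (K - 1)      # chance agreement
    return (pa - pe) / (1 - pe)

# Perfect agreement among 3 raters on 4 items gives AC1 = 1.
perfect = np.array([[3, 0], [3, 0], [0, 3], [0, 3]])
print(gwet_ac1(perfect))  # 1.0
```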
41,414
Biased prediction (overestimation) for xgboost
What is described is not very surprising. Boosting methods usually do not give well calibrated probabilistic predictions (e.g. see Caruana et al. (2004) "Ensemble Selection from Libraries of Models", Niculescu-Mizil & Caruana (2005) "Predicting good probabilities with supervised learning"). GLM-based methods provide more consistent marginal probabilities as they have a direct probabilistic connection. That said, probabilistic estimates can be calibrated in a follow-up step (e.g. through isotonic regression or beta calibration). Please note that these methods are usually applied on a separate hold-out sample. Check the paper from Kull et al. (2017) "Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers"; irrespective of using beta calibration or not, it gives a very nice modern exposition of the matter. The choice of the model depends on what we want to do with it/use its estimates for. Do we care more about the order or the marginal probabilities of our estimates? If, for example, we care about the "fairness" of our prediction, the marginal estimates are more important. On the other hand, if we want to pick the "top X most probable" items for a particular treatment, AUC is more important. Probability calibration techniques try to bridge that gap to a certain extent.
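One simple way to see whether calibration is needed is a reliability check on a hold-out sample: bin the predicted probabilities and compare each bin's mean prediction with the observed event rate. A numpy-only sketch (the simulated predictions here are perfectly calibrated by construction, so the bins agree):

```python
import numpy as np

def reliability(pred, y, bins=10):
    """Return (mean prediction, observed rate) per probability bin."""
    edges = np.linspace(0, 1, bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (pred >= lo) & (pred < hi)
        if m.any():
            rows.append((pred[m].mean(), y[m].mean()))
    return rows

rng = np.random.default_rng(0)
p_true = rng.uniform(size=20000)                 # predicted probabilities
y = (rng.uniform(size=p_true.size) < p_true).astype(float)  # outcomes

# For a well-calibrated model the two columns track each other closely.
for mean_pred, obs_rate in reliability(p_true, y):
    assert abs(mean_pred - obs_rate) < 0.05
```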
41,415
How to compute confidence interval in a case with small sample size, small population size, and one very dominant class?
Judging from its abstract, the JASA article by Weizhen Wang linked in my comment gives a method to get (nearly) exact hypergeometric confidence intervals. Perhaps a more easily computed style of CI, based on a normal approximation to the hypergeometric distribution, will suffice for your purposes. Main example: Suppose we know there are $T$ marbles in the urn, of which we withdraw $n = 40$ without replacement, observing $x = 37$ red marbles in our sample. We wish to estimate the number $R$ of red marbles in the urn. A Wald-style asymptotic CI would use $p = x/n$ to estimate the proportion of red marbles, thus estimating that the number of red balls in the urn is $R = pT$ (rounded to an integer). Such an interval would degenerate to a point estimate for $x = 0$ or $x = n,$ which you say you want to avoid. [Note: Our goal is to estimate the integer parameter $R.$ So a 'point' CI is not necessarily absurd.] For binomial CIs, the Agresti-Coull style of interval 'shrinks' the point estimate of the binomial success probability $\theta$ towards $1/2$ to provide an interval that does not degenerate to points for extreme observations and has more accurate coverage probability than Wald CIs. They use $\hat \theta = \frac{x+2}{n+4},$ but arguments can be made that $\hat \theta = \frac{x+1}{n+2}$ might also be used. Accordingly, I suggest the estimate $p = \frac{x+1}{n+2}$ as an estimate for the hypergeometric $p$ in your problem. Proposed interval from normal approximation. The R code below computes the interval $p \pm 1.96 \sqrt{\frac{cp(1-p)}{n+4}},$ with $p = \frac{x+1}{n+2}$ and the 'finite population correction' $c = \frac{T-n}{T-1}.$ In terms of the number $R$ of red marbles, the result is $[125, 147]$. (I'm using R statistical software to do the calculations; a calculator would suffice.) 
t = 150  # marbles in urn
x = 37   # red in sample
n = 40   # marbles in sample
p = (x + 1)/(n + 2)
cor = (t - n)/(t - 1)
me = 1.96*sqrt(cor*p*(1 - p)/(n + 4))
lcl = p - me; ucl = p + me
lcl; ucl
[1] 0.8302363
[1] 0.9792875
LCL = max(0, round(t*lcl))
UCL = min(t, round(t*ucl))
c(LCL, UCL)
[1] 125 147

For $x = 0, 20,$ and $40,$ this style of CI gives 95% interval estimates $[0,15],\, [56,94],$ and $[135,150],$ respectively. The interval for $x = 20$ may look excessively long, but I believe it is reasonable. Compare with the roughly corresponding Agresti-Coull binomial 95% CI $(0.352, 0.648)$ for 20 observed successes in 40 trials. The "general method." More directly, the so-called 'general method' for confidence intervals can be (roughly) applied to the hypergeometric problem as shown below. [I say roughly, because some minor fussing with the discrete nature of the hypergeometric distribution remains unresolved.]

r = 0:150
h1 = qhyper(.025, r, 150 - r, 40)
h2 = qhyper(.975, r, 150 - r, 40)
plot(r, h1, type="s", ylab="Red Obs", xlab="Red Est")
lines(r, h2, type="s")
abline(h = 20, col="red")
abline(v = c(56, 94), col="blue")

For $x = 20,$ the 95% CI from the proposed modification of the Wald interval agrees pretty well with the CI from the general method. Below, the graph for the general method shows lines corresponding to our main example with $x = 37.$ Furthermore, agreement for the extreme cases ($x = 0$ or $x = n$) is not perfect, but also pretty good. (Even for large $T,$ the normal approximation is less accurate for $R$ near $0$ or $T.$ Maybe you can check the extreme cases for yourself from a printout of the figure.) Unresolved. An unresolved issue with the general method in this case is that it is not possible in general to get 95% CIs by 'cutting exactly 2.5% from each tail of the distribution' because the hypergeometric distribution is discrete. 
The usual approach is to start by getting 'optimal' one-sided CIs, and from them to get two-sided CIs with approximately 95% coverage--as near as possible to 95% without going below. (To use a normal approximation is essentially to ignore the discreteness issue, not to resolve it.) I do not see how to make sense of getting CIs for the number of red marbles without knowing the total number of marbles. If $n < 0.1T,$ it might be argued that binomial CIs should be used to give proportions of red marbles.
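For readers not using R, the 'general method' inversion can be sketched in Python with SciPy's hypergeometric distribution. The numbers mirror the main example; in `hypergeom(T, R, n)`, `T` is the urn size, `R` the candidate number of red marbles, and `n` the sample size. This is only a sketch of the idea, with the discreteness caveats noted above.

```python
from scipy.stats import hypergeom

T, n, x = 150, 40, 37  # urn size, sample size, observed red marbles

# General-method CI: keep every candidate count R whose central 95% band
# for the observed number of reds covers x.
ci = [R for R in range(T + 1)
      if hypergeom.ppf(0.025, T, R, n) <= x <= hypergeom.ppf(0.975, T, R, n)]
print(min(ci), max(ci))
```

The resulting interval should land close to the normal-approximation interval $[125, 147]$ reported above, with small differences attributable to discreteness.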
41,416
visreg visualization of mgcv results (GAM)
The difference arises because you are ignoring the intercept (& the coef for the non-reference levels of the factor; see first Note) when you go via the mgcv:::plot.gam() method. The plot.gam() output is showing the smooth effect of each variable conditional upon the other terms in the model. The blue visreg line shows the estimated smooth effect of Tag, including the model intercept.

Note your model may be wrong; you probably should include Wundinfektionsstatus as a parametric term, because the by-factor smooths are centered about zero and hence you need a parametric fixed effect (in this case) to adjust the mean response of each level up/down. Without it, the model has to build this into the smooth functions.

You can achieve the same thing in mgcv by predicting from the model over a grid of values for Tag (in this case of the blue line) repeated for each level of your factor, and you'll need to include a dummy patient ID, which we'll exclude:

## Assuming data in df
pred_df <- with(df, expand.grid(Tag = seq(min(Tag), max(Tag), length = 100),
                                Wundinfektionsstatus = levels(Wundinfektionsstatus),
                                PatientenID = sample(PatientenID, 1)))
pred <- predict(fit, newdata = pred_df, type = 'link', se.fit = TRUE,
                exclude = c("s(PatientenID)", "s(PatientenID,Tag)"))
pred_df <- cbind(pred_df, as.data.frame(pred))
## create confidence interval and back transform
pred_df <- transform(pred_df,
                     fitted_response = exp(fit),
                     fitted_upper = exp(fit + (2 * se.fit)),
                     fitted_lower = exp(fit - (2 * se.fit)))

pred_df should be in a format to plot with ggplot2 as you see fit:

library('ggplot2')
ggplot(pred_df, aes(x = Tag, y = fitted_response)) +
  geom_ribbon(aes(fill = Wundinfektionsstatus,
                  ymin = fitted_lower, ymax = fitted_upper)) +
  geom_line(aes(colour = Wundinfektionsstatus))

Note: treat the above as pseudo-code; I wrote this from memory and it is untested, as there wasn't a reproducible example.
41,417
Is the independent sum of a continuous random variable and mixed random variable continuous?
There is a relatively elementary demonstration that the sum is continuous. Let $X$ have a probability distribution function $F_X$ with density function $f_X$ and let the distribution function of $Y$ be $F_Y.$ We do not assume $Y$ has a density function. I claim that $X+Y$ has a density function (implying it is absolutely continuous) and its density can be expressed as an expectation, $$f_{X+Y}(z) = E[f_X(z-Y)] = \int_\mathbb{R} f_X(z-y) \mathrm{d}F_Y(y).$$ To prove this claim, it suffices to show that integrating $f_{X+Y}$ indeed gives the desired probability function for $X+Y.$ The integration is performed by invoking Fubini's Theorem to change the order of the integrals, then changing the variable of integration from $w-y$ to $x,$ and finally expressing a probability in terms of an indicator function $\mathcal{I}.$ The remaining equations are just definitions of distribution functions and expectations as integrals: $$\eqalign{ \int_{-\infty}^z f_{X+Y}(w)\mathrm{d}w &= \int_{-\infty}^z \int_\mathbb{R} f_X(w-y) \mathrm{d}F_Y(y)\ \mathrm{d}w \\ &= \int_\mathbb{R} \int_{-\infty}^z f_X(w-y) \mathrm{d}w\ \mathrm{d}F_Y(y) \\ &= \int_\mathbb{R} \int_{-\infty}^{z-y} f_X(x) \mathrm{d}x\ \mathrm{d}F_Y(y) \\ &= \int_\mathbb{R} F_X(z-y) \mathrm{d}F_Y(y) \\ &= E[F_X(z-Y)] \\ &= E_Y[\Pr(X \le z-Y)] \\ &= E_Y[E_X[\mathcal{I}(X+Y\le z)]] \\ &= \Pr(X+Y\le z) \\ &= F_{X+Y}(z). }$$ For some intuition, think of adding $X$ to $Y$ as "smearing" every possible value of $Y$ according to the distribution of $X$ or, equivalently, as using $Y$ to weight a mixture of shifted versions of $X.$ In either case it's clear the result will have no atoms because $X$ has no atoms and so (of course) none of its shifted versions have atoms, either, whence no mixture of them will have any atoms. In this figure, the left panel depicts the density of $X.$ The next panel shows the mass of $Y$ -- this variable has no density. 
Nevertheless, as shown in the third panel, adding $Y$ to $X$ produces as many continuous components of $X$ as there are spikes in $Y,$ each one scaled by the height of its spike. The density of $X+Y$ is the accumulated height of all these components. Because it is formed from density functions, it too is a density, showing that $X+Y$ follows a continuous distribution even though $Y$ does not.
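The smearing intuition is easy to check in a quick simulation. The particular choices below — $X$ standard normal (continuous) and $Y$ mixed, with an atom of mass $1/2$ at zero and otherwise uniform on $(0,1)$ — are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

X = rng.normal(size=n)                                # continuous
Y = np.where(rng.random(n) < 0.5, 0.0, rng.random(n))  # mixed: atom at 0

Z = X + Y

# Y has a big atom: about half its draws equal 0 exactly.
print(np.mean(Y == 0.0))
# Z has no atoms: every simulated value is distinct (almost surely,
# as expected for a continuous distribution).
print(len(np.unique(Z)) == len(Z))
```

Replacing the uniform component or the atom locations changes nothing essential: the sum inherits continuity from $X$ alone.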
41,418
Is it advisable to use output from a ML model as a feature in another ML model?
It's perfectly OK, and also widely used. Different models can capture different aspects of the data, and stacking them — so that later models use the outputs/predictions produced by earlier layers as features — enables even a moderately simple final-layer algorithm to perform much better than it would on its own, because it draws on the cumulative knowledge learned by the other algorithms. This is somewhat analogous to adding layers to neural networks.
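A minimal sketch of this idea with scikit-learn's stacking API; the synthetic dataset and the choice of base learners here are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The base models' out-of-fold predictions become the features of the
# final (meta) model -- literally "model output as input to another model".
stack = StackingClassifier(
    estimators=[('rf', RandomForestClassifier(random_state=0)),
                ('lr', LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(),
    cv=5)
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))
```

Note that `StackingClassifier` fits the meta-model on cross-validated predictions of the base models, which guards against the target leakage that naive stacking (feeding in-sample predictions forward) would invite.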
Is it advisable to use output from a ML model as a feature in another ML model?
It's perfectly OK, and also widely used. Different models can explain different perspectives of data and stacking them in front of each other, and using outputs/predictions produced by previous layers
Is it advisable to use output from a ML model as a feature in another ML model? It's perfectly OK, and also widely used. Different models can explain different perspectives of data and stacking them in front of each other, and using outputs/predictions produced by previous layers enables even moderately simple final layer algorithms to perform much better compared to on their own, because they use the cumulative knowledge learned via other algorithms. This is somewhat analogous to adding layers to neural networks.
Is it advisable to use output from a ML model as a feature in another ML model? It's perfectly OK, and also widely used. Different models can explain different perspectives of data and stacking them in front of each other, and using outputs/predictions produced by previous layers
41,419
In linear regression, why are raw least squares residuals heteroskedastic?
Assume the usual linear model with constant error variance $\sigma^2$. I will use notation (and some results) from Leverages and effect of leverage points.

The linear model in matrix form is $$ Y= X\beta + \epsilon $$ where $\epsilon$ is a vector of $n$ iid error terms. Then the hat matrix is $H=X(X^TX)^{-1}X^T$, and its diagonal terms are the leverages $h_{ii}$. We can show that the variance of the residuals $e_i = y_i-\hat{y}_i$ is $\sigma^2 (1-h_{ii})$ (remember $0<h_{ii}<1$). So, under this model, to get constant-variance residuals we divide by $\sqrt{1-h_{ii}}$: the standardized residuals defined by $r_i=\frac{y_i-\hat{y}_i}{\sqrt{1-h_{ii}}}$ have constant variance. For many uses in residual analysis we therefore prefer these standardized residuals, for instance when checking the assumption of constant variance.

EDIT In a comment the OP writes:

"As far as I know the formal assumption is not 'homoscedasticity of standardized residuals', but only residuals by itself."

This confuses errors with residuals. The errors are the unobserved $\epsilon_i$ in the regression equation $y_i =\beta_0 +\sum_j \beta_j x_{ij} +\epsilon_i$, while a residual is the observed difference between an observation and the model's prediction. Homoskedasticity means that the errors all have the same variance, not that the residuals have constant variance. If you want to use residuals to test/criticize the constant-variance assumption, it is better to use a version of the residuals that does have constant variance (under the model).
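Since $(I-H)X = 0,$ the residual vector is $e = (I-H)Y = (I-H)\epsilon,$ and because $I-H$ is symmetric and idempotent, $\operatorname{Cov}(e)=\sigma^2(I-H),$ giving $\operatorname{Var}(e_i)=\sigma^2(1-h_{ii}).$ A quick numerical sanity check of these matrix facts, with an arbitrary random design matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary design: intercept column plus 3 random covariates.
n, k = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])

H = X @ np.linalg.solve(X.T @ X, X.T)  # hat matrix
M = np.eye(n) - H                      # residual-maker matrix

# M symmetric and idempotent  =>  Cov(e) = sigma^2 * M,
# so Var(e_i) = sigma^2 * (1 - h_ii).
print(np.allclose(M, M.T), np.allclose(M @ M, M))

# With an intercept the leverages satisfy 1/n <= h_ii < 1.
h = np.diag(H)
print(h.min(), h.max())
```

The same check works for any full-rank design; only the leverage values themselves change.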
41,420
In linear regression, why are raw least squares residuals heteroskedastic?
Suppose you have three $x$-values: $-1,0, +1.$ The corresponding dependent variables $Y_1,Y_2,Y_3$ are where the randomness is.

Now draw the picture. You can see why, if you move $Y_2$ up or down, the fitted line moves up or down. (By just $1/3$ as much as $Y_2$ moves.) But what happens if you move $Y_3$ up or down? The fitted line doesn't just move up or down; its slope also gets bigger or smaller. Or if you move $Y_1$ up or down, then the slope gets smaller or bigger, respectively. So the line has more tendency to stay close to the data point when the data point's $x$-value is far from the average $x$-value than when it is near the average $x$-value. Hence the observed residuals have a smaller variance when the $x$-value is far from the average $x$-value than when the $x$-value is close to the average $x$-value.

The fitted values are \begin{align} & \left(\widehat Y_1, \widehat Y_2, \widehat Y_3\right) \\[5pt] = {} & \left( \tfrac 5 6 Y_1+ \tfrac 1 3 Y_2 - \tfrac 1 6 Y_3, \,\,\, \tfrac 1 3 (Y_1+Y_2 + Y_3), \,\,\, -\tfrac 1 6 Y_1 + \tfrac 1 3 Y_2 + \tfrac 5 6 Y_3 \right). \end{align} So the residuals are \begin{align} & \left( Y_1, Y_2, Y_3 \right) - \left(\widehat Y_1, \widehat Y_2, \widehat Y_3\right) \\[5pt] = {} & \left( \tfrac 1 6 Y_1 - \tfrac 1 3 Y_2 + \tfrac 1 6 Y_3, \,\,\, -\tfrac 1 3 Y_1+ \tfrac 2 3 Y_2 - \tfrac 1 3 Y_3, \,\,\, \tfrac 1 6 Y_1 - \tfrac 1 3 Y_2 + \tfrac 1 6 Y_3 \right). \end{align} From this one can compute the variances of the residuals: $\operatorname{Var}(e_1)=\operatorname{Var}(e_3)=\sigma^2/6,$ while $\operatorname{Var}(e_2)=2\sigma^2/3.$
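This little example can be checked numerically: with $x=(-1,0,1)$ and an intercept, the rows of the hat matrix give the weights of $Y_1,Y_2,Y_3$ in each fitted value, and $1-h_{ii}$ gives the residual variances (in units of $\sigma^2$). A sketch using numpy:

```python
import numpy as np

# Design matrix for x = -1, 0, 1 with an intercept.
X = np.array([[1., -1.], [1., 0.], [1., 1.]])
H = X @ np.linalg.solve(X.T @ X, X.T)

# Row i of H gives the weights of (Y_1, Y_2, Y_3) in the fitted value i.
print(np.round(H, 4))

# Var(e_i) = sigma^2 * (1 - h_ii): smaller at x = -1 and x = 1 than at x = 0.
print(1 - np.diag(H))  # [1/6, 2/3, 1/6]
```

The residual variance is smallest at the extreme $x$-values, matching the geometric argument above.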
41,421
Is there a random variable $X$ with positive support such that the ratio of the two smallest realizations of an iid sample goes to one?
Yes, there are such distributions where the ratio of the second-smallest to smallest values approaches unity in probability as the sample size grows large. They have to behave "essentially" like distributions with strictly positive support in the sense of approaching zero probability very, very rapidly at the origin. See the first figure below for an illustration. (I rely here on the correct intuition that when the distribution has a strictly positive minimum, eventually the two smallest values in large samples will both be close to that minimum with high probability, whence their ratio approaches unity. This intuition won't work when the minimum is zero.) For convenience, let's work with independent and identically distributed random variables $X_i$ with continuous distributions. This means they have a common density $f$ and $F(x)=F(0)+\int_0^x f(t)\mathrm{d}t$ is their common distribution function. The question concerns the smallest two among the first $n$ of these variables, $X\lt Y,$ where $n$ will grow arbitrarily large. The joint distribution of the two smallest values among them has density $$f_{n;2}(x,y) = n(n-1)f(x)f(y)\left(1-F(y)\right)^{n-2}$$ for all $0\le x\le y.$ Introducing a variable $u$ defined by $$x = uy$$ to represent the ratio $x/y\le 1,$ changing variables from $(x,y)$ to $(u,y),$ integrating from $u=0$ to $u=r$ (which can be expressed in terms of $F$), and integrating over all possible values of $y$ will give us the distribution function of the ratio $U=X/Y$ as $$\Pr(U \le r) = n(n-1)\int_0^\infty f(y)F(ry)(1-F(y))^{n-2}\,\mathrm{d}y$$ for $0 \le r \le 1.$ This expression will be the object of our analysis. In some cases the distribution of $U$ is easy to evaluate. Although this next section is a diversion, it reveals a thought process that leads to an answer. 
Take, for instance, $$F_p(y) = y^p$$ for $0\le y\le 1$ where $p\gt 0.$ I obtain an answer that does not vary with $n$ at all: for possible ratios $0\le r \le 1,$ $$\Pr(U \le r) = r^p.\tag{1}$$ This indicates that any distribution $F$ that behaves like $F(y)\approx y^p$ near the origin will yield something like this power distribution for the ratio; in particular, it will not converge to $1$ in probability. As $p$ grows large, $(1)$ does converge to the constant value $1$ in probability. In other words, although we have not discovered any distributions where the ratio $U$ approaches $1$, we do have a family of distributions where this ratio can be made as close to $1$ as we might like by choosing a suitable member of the family (that is, by picking the power $p$ to be sufficiently large). We might therefore consider distributions where, at the origin, $F$ is flatter than any polynomial. The classic, and one of the simplest, such functions is $$F(x) = \exp\left(1 - \frac{1}{x^2}\right)$$ for $0 \le x \le 1.$ Obviously its support extends down to $0,$ because the exponential is never zero. $F$ is infinitely differentiable at $0$ but all derivatives are zero there. In this case the integral for $\Pr(U\le r)$ still can be evaluated. It is simpler to express the result in terms of $s \ge 1,$ where $$1/s^2 = r,$$ as $$\Pr(U \le r) = \Pr(U \le 1/s^2) = \frac{e^{1-s}n!}{(1+s)^{(n-1)}}= s\,e^{1-s}\frac{1^{(n)}}{s^{(n)}}\tag{2}$$ where $$s^{(n)} = s(1+s)(2+s)\cdots(n-1+s)$$ is the Pochhammer function. Here are plots of this probability (as a function of $s$) for $n=2, 2^2, 2^{2^2}, 2^{2^3},$ and $2^{2^4}.$ The graphs drop towards a level of zero as $n$ increases: It is easy to show that for all $z = s-1 \gt 0,$ $$\frac{1^{(n)}}{s^{(n)}} = \frac{1^{(n)}}{(1+z)^{(n)}} = \frac{1}{1+z}\frac{2}{2+z}\frac{3}{3+z}\cdots\frac{n}{n+z} \to 0$$ as $n$ grows large. (Examine the MacLaurin series of its logarithm.) 
Thus, for all $s=1+z \gt 1,$ $(2)$ goes to $0,$ demonstrating that the ratio $U$ approaches the constant $1$ in probability.
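A small simulation illustrates the limit. Inverting $F(x)=\exp(1-1/x^2)$ gives $x = 1/\sqrt{1-\ln u}$ for $u\sim U(0,1),$ so sampling and inspecting the ratio of the two smallest order statistics is straightforward:

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_two_smallest(n):
    # Draw n variates from F(x) = exp(1 - 1/x^2) on (0, 1] by inversion.
    u = rng.random(n)
    x = 1.0 / np.sqrt(1.0 - np.log(u))
    two = np.partition(x, 1)[:2]   # the two smallest order statistics
    return two.min() / two.max()   # U = X / Y

# The ratio drifts toward 1 as the sample size grows.
for n in (100, 10_000, 1_000_000):
    print(n, ratio_two_smallest(n))
```

By the formula $(2)$ above, for $n=10^6$ the probability that this ratio falls below, say, $0.3$ is already negligible, so repeated runs will show values clustering ever closer to $1.$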
Is there a random variable $X$ with positive support such that the ratio of the two smallest realiza
Yes, there are such distributions where the ratio of the second-smallest to smallest values approaches unity in probability as the sample size grows large. They have to behave "essentially" like dist
Is there a random variable $X$ with positive support such that the ratio of the two smallest realizations of an iid sample goes to one? Yes, there are such distributions where the ratio of the second-smallest to smallest values approaches unity in probability as the sample size grows large. They have to behave "essentially" like distributions with strictly positive support in the sense of approaching zero probability very, very rapidly at the origin. See the first figure below for an illustration. (I rely here on the correct intuition that when the distribution has a strictly positive minimum, eventually the two smallest values in large samples will both be close to that minimum with high probability, whence their ratio approaches unity. This intuition won't work when the minimum is zero.) For convenience, let's work with independent and identically distributed random variables $X_i$ with continuous distributions. This means they have a common density $f$ and $F(x)=F(0)+\int_0^x f(t)\mathrm{d}t$ is their common distribution function. The question concerns the smallest two among the first $n$ of these variables, $X\lt Y,$ where $n$ will grow arbitrarily large. The joint distribution of the two smallest values among them has density $$f_{n;2}(x,y) = n(n-1)f(x)f(y)\left(1-F(y)\right)^{n-2}$$ for all $0\le x\le y.$ Introducing a variable $u$ defined by $$x = uy$$ to represent the ratio $x/y\le 1,$ changing variables from $(x,y)$ to $(u,y),$ integrating from $u=0$ to $u=r$ (which can be expressed in terms of $F$), and integrating over all possible values of $y$ will give us the distribution function of the ratio $U=X/Y$ as $$\Pr(U \le r) = n(n-1)\int_0^\infty f(y)F(ry)(1-F(y))^{n-2}\,\mathrm{d}y$$ for $0 \le r \le 1.$ This expression will be the object of our analysis. In some cases the distribution of $U$ is easy to evaluate. Although this next section is a diversion, it reveals a thought process that leads to an answer. 
Take, for instance, $$F_p(y) = y^p$$ for $0\le y\le 1$ where $p\gt 0.$ I obtain an answer that does not vary with $n$ at all: for possible ratios $0\le r \le 1,$ $$\Pr(U \le r) = r^p.\tag{1}$$ This indicates that any distribution $F$ that behaves like $F(y)\approx y^p$ near the origin will yield something like this power distribution for the ratio; in particular, it will not converge to $1$ in probability. As $p$ grows large, $(1)$ does converge to the constant value $1$ in probability. In other words, although we have not discovered any distributions where the ratio $U$ approaches $1$, we do have a family of distributions where this ratio can be made as close to $1$ as we might like by choosing a suitable member of the family (that is, by picking the power $p$ to be sufficiently large). We might therefore consider distributions where, at the origin, $F$ is flatter than any polynomial. The classic, and one of the simplest, such functions is $$F(x) = \exp\left(1 - \frac{1}{x^2}\right)$$ for $0 \le x \le 1.$ Obviously its support extends down to $0,$ because the exponential is never zero. $F$ is infinitely differentiable at $0$ but all derivatives are zero there. In this case the integral for $\Pr(U\le r)$ still can be evaluated. It is simpler to express the result in terms of $s \ge 1,$ where $$1/s^2 = r,$$ as $$\Pr(U \le r) = \Pr(U \le 1/s^2) = \frac{e^{1-s}n!}{(1+s)^{(n-1)}}= s\,e^{1-s}\frac{1^{(n)}}{s^{(n)}}\tag{2}$$ where $$s^{(n)} = s(1+s)(2+s)\cdots(n-1+s)$$ is the Pochhammer function. Here are plots of this probability (as a function of $s$) for $n=2, 2^2, 2^{2^2}, 2^{2^3},$ and $2^{2^4}.$ The graphs drop towards a level of zero as $n$ increases: It is easy to show that for all $z = s-1 \gt 0,$ $$\frac{1^{(n)}}{s^{(n)}} = \frac{1^{(n)}}{(1+z)^{(n)}} = \frac{1}{1+z}\frac{2}{2+z}\frac{3}{3+z}\cdots\frac{n}{n+z} \to 0$$ as $n$ grows large. (Examine the MacLaurin series of its logarithm.) 
Thus, for all $s=1+z \gt 1,$ $(2)$ goes to $0,$ demonstrating the ratio $U$ approaches the constant $1$ in probability.
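The claimed convergence can be checked numerically. Below is a small pure-Python simulation (a sketch added for illustration; the function name, seed, and trial counts are arbitrary) that samples from $F(x)=\exp(1-1/x^2)$ by inverse-CDF sampling and tracks the mean ratio of the two smallest draws as $n$ grows:

```python
import math
import random

def mean_ratio(n, trials=1000, seed=1):
    """Monte Carlo estimate of E[X/Y], with X < Y the two smallest of
    n iid draws from F(x) = exp(1 - 1/x^2) on (0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Inverse-CDF sampling: u = exp(1 - 1/x^2)  =>  x = 1/sqrt(1 - log(u))
        draws = sorted(1.0 / math.sqrt(1.0 - math.log(1.0 - rng.random()))
                       for _ in range(n))
        total += draws[0] / draws[1]
    return total / trials

for n in (4, 64, 1024):
    print(n, round(mean_ratio(n), 3))
```

The mean ratio creeps upward toward $1$ as $n$ increases; the convergence is slow, consistent with the Pochhammer ratio decaying like $\Gamma(s)\,n^{1-s}$.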
Distribution of sum of independent exponentials with random number of summands
As detailed in this X validated answer, waiting for a sum of iid exponential $\mathcal E(\lambda)$ variates to exceed one produces a Poisson $\mathcal P(\lambda)$ variate $N$. Hence waiting for a sum of iid exponential $\mathcal E(\lambda)$ variates to exceed $\tau_a$ produces a Poisson $\mathcal P(\tau_a\lambda)$ variate $N$, conditional on $\tau_a$ (since dividing the sum by $\tau_a$ amounts to multiplying the exponential parameter by $\tau_a$). Therefore \begin{align*} \mathbb P(N=n)&=\int_0^\infty \mathbb P(N=n|\tau_a) \,\lambda_a e^{-\lambda_a\tau_a}\,\text{d}\tau_a\\ &= \int_0^\infty \dfrac{(\lambda\tau_a)^n}{n!}\,e^{-\tau_a\lambda} \,\lambda_a e^{-\lambda_a\tau_a}\,\text{d}\tau_a\\ &=\dfrac{\lambda_a\lambda^n}{n!}\,\int_0^\infty \tau_a^n\,e^{-\tau_a(\lambda+\lambda_a)} \,\text{d}\tau_a\\ &=\dfrac{\lambda_a\lambda^n}{n!}\,\dfrac{\Gamma(n+1)}{(\lambda_a+\lambda)^{n+1}}=\dfrac{\lambda_a\lambda^n}{(\lambda_a+\lambda)^{n+1}} \end{align*} which is a Geometric $\mathcal G(\lambda_a/\{\lambda_a+\lambda\})$ random variable. (Here the Geometric variate is a number of failures, meaning its support starts at zero.)
Consider now $N$ as a Geometric number of trials, $N\ge 1$, and the distribution of $$\zeta=\sum_{i=1}^N \tau_i,\qquad \tau_i\overset{\text{iid}}{\sim}\mathcal E(\lambda)\,.$$ The moment generating function of $\zeta$ is $$\mathbb E[e^{z\zeta}]=\mathbb E[e^{z\{\tau_1+\cdots+\tau_N\}}]=\mathbb E^N[\mathbb E^{\tau_1}[e^{z\tau_1}]^N]=\mathbb E^N[\{\lambda/(\lambda-z)\}^N]=\mathbb E^N[e^{N(\ln \lambda-\ln (\lambda-z))}]$$ and the mgf of a Geometric $\mathcal G(p)$ variate is $$\varphi_N(z)=\dfrac{pe^z}{1-(1-p)e^z}.$$ Hence the moment generating function of $\zeta$ is $$\dfrac{pe^{\ln \lambda-\ln (\lambda-z)}}{1-(1-(\lambda_a/\{\lambda_a+\lambda\}))e^{\ln \lambda-\ln (\lambda-z)}}=\dfrac{p \lambda}{ \lambda-z-\lambda^2/\{\lambda_a+\lambda\}}$$ where $p=\lambda_a/\{\lambda_a+\lambda\}$, which leads to the mgf $$\dfrac{\lambda\lambda_a/\{\lambda_a+\lambda\}}{ \lambda\lambda_a/\{\lambda_a+\lambda\}-z}=\dfrac{1}{1-z(p\lambda)^{-1}},$$ meaning that $\zeta$ is an Exponential $\mathcal{E}(\lambda\lambda_a/\{\lambda_a+\lambda\})$ variate.
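This conclusion is easy to verify by simulation. The following pure-Python sketch (added for illustration; the parameter values $\lambda=2$, $\lambda_a=3$ are arbitrary) draws $N$ as a Geometric number of trials with $p=\lambda_a/(\lambda_a+\lambda)$, sums $N$ iid $\mathcal E(\lambda)$ variates, and compares the empirical mean of $\zeta$ with the Exponential mean $1/(p\lambda)$:

```python
import random

# Check by simulation: with N ~ Geometric(p) counted as a number of trials
# (N >= 1) and tau_i ~ Exp(lambda), the sum zeta should be Exponential with
# rate p*lambda = lambda*lambda_a / (lambda_a + lambda).
lam, lam_a = 2.0, 3.0
p = lam_a / (lam_a + lam)
rng = random.Random(42)
sims, total = 100_000, 0.0
for _ in range(sims):
    n_trials = 1
    while rng.random() > p:          # geometric number of trials
        n_trials += 1
    total += sum(rng.expovariate(lam) for _ in range(n_trials))
print(total / sims, 1.0 / (p * lam))  # both close to 5/6
```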
Literature on $\ell_q$ LASSO, $q < 1$
Frank & Friedman (1993) suggested the idea of bridge estimates, with penalty function $P_B=\lambda\sum_j|\alpha_j|^\gamma$, as a paradigm for understanding subset selection and ridge regression. The $\ell_0$-norm corresponds to subset selection methods, $\ell_1$ is the LASSO, and $\ell_2$ is ridge regression. They noted that it would be beneficial to estimate the parameters $\lambda$ and $\gamma$ simultaneously to widen the choice of possible models but did not develop the method any further. The $\lambda$ parameter controls the size of estimates ($\hat\alpha_j^B$) or the amount of shrinkage, while the $\gamma$ parameter determines the directions in which the parameters are aligned with respect to the coordinate axes. When $\gamma\in(0,1)$: The penalty function $P_B=\lambda\sum_j|\alpha_j|^\gamma$ is concave. The figure below shows the concave penalty functions (dotted) in comparison with the LASSO penalty function (solid). Some parameters are set to zero and the shrinkage is inversely proportional to the size of the parameters. The figure shows the thresholding function $\hat\alpha_j - \operatorname{sign}(\hat\alpha_j)\lambda\gamma|\hat\alpha_j|^{\gamma-1}$, where $\hat\alpha_j$ are the OLS estimates. Here, with $\lambda=4$ and $\gamma=0.25$ or $\gamma=0.5$, large parameters remain fairly untouched by the shrinkage. With the LASSO (solid line), the shrinkage is constant. Estimates are likely to occur on the axes. The figure shows norm balls in $\mathbb{R}^2$ (left) and $\mathbb{R}^3$ (right) for $\gamma=0.5$. Please see pages 118-119 and 126-127 of Kirkland (2014) for a comparison of these figures with other values of $\gamma$. This master's thesis also provides an overview of other shrinkage methods. Knight & Fu (2000) showed that bridge estimates are consistent and have asymptotic normal distributions. The main idea behind the concave penalty functions is that large parameters are penalized less so that the resulting estimates are nearly unbiased.
I know of two other shrinkage methods which make use of concave penalties and may be of interest to you: Fan & Li (2001) proposed SCAD, which was the first shrinkage method having the oracle property. Although the adaptive LASSO is oracle, the bias may decrease at a faster rate with SCAD. Zhang (2010) proposed MCP, which follows a similar approach to SCAD but penalizes smaller parameters less. Despite having concave penalties which are also non-differentiable at zero, both methods provide efficient algorithms for computing the solution, even in high-dimensional settings where $p\geq n$.
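The contrast between the bridge thresholding function quoted above and the LASSO's constant shrinkage can be made concrete in a few lines of Python (a sketch added for illustration; the one-step rule below simply sets an estimate to zero when the shrinkage would overshoot it, mirroring "some parameters are set to zero"):

```python
import math

def bridge_shrink(a, lam=4.0, gamma=0.5):
    """One-step bridge shrinkage of an OLS estimate a, as in the text:
    a - sign(a)*lam*gamma*|a|^(gamma-1); estimates the shrinkage would
    overshoot are set to zero."""
    if a == 0.0:
        return 0.0
    shrink = lam * gamma * abs(a) ** (gamma - 1.0)
    if shrink >= abs(a):
        return 0.0
    return a - math.copysign(shrink, a)

def soft_threshold(a, lam=4.0):
    """LASSO soft-thresholding: shrink |a| by the constant lam, or to zero."""
    return math.copysign(max(abs(a) - lam, 0.0), a)

for a in (1.0, 5.0, 20.0, 100.0):
    print(a, round(bridge_shrink(a), 3), round(soft_threshold(a), 3))
```

With $\lambda=4$, $\gamma=0.5$, a large estimate such as $100$ is shrunk by only $0.2$ under the bridge rule, while the LASSO always removes the full $4$.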
How to calculate ARMA model manually without R or Python
It is unclear in your question what "manual calculation" excludes, and your comment that you cannot use "advanced functions" is also not very helpful. In any case, fitting an ARMA model via maximum-likelihood estimation is an optimisation problem where you need to maximise a function over a set of parameters. For example, the log-likelihood function in a stationary Gaussian ARMA is, up to an additive constant: $$\ell_{x}(\mu,\boldsymbol{\phi},\boldsymbol{\theta}) = - \frac{1}{2} \ln | \boldsymbol{\Sigma}(\boldsymbol{\phi},\boldsymbol{\theta})| - \frac{1}{2} (\mathbf{x} - \mu \boldsymbol{1})^\text{T} \boldsymbol{\Sigma}(\boldsymbol{\phi},\boldsymbol{\theta})^{-1} (\mathbf{x} - \mu \boldsymbol{1}),$$ where the covariance matrix $\boldsymbol{\Sigma}(\boldsymbol{\phi},\boldsymbol{\theta})$ depends on the parameters $\boldsymbol{\phi}$ and $\boldsymbol{\theta}$ according to the auto-covariance function for the ARMA model. This function has two terms. The second is a standard sum-of-squares term, but the first is a more complicated term involving the logarithm of the determinant of the covariance matrix. The exact MLE method has critical point equations that cannot be put into closed form, so this would entail the use of iterative methods (e.g., Newton-Raphson iteration) to find the maximising values. If you are willing to deviate slightly from the exact MLE and use the partial likelihood function (excluding the logarithmic term), this gives MLEs that can be obtained as standard WLS estimates. Once you estimate the parameters in the model you can make forecasts as point-estimates by substituting the parameter estimates. It is certainly possible to program this optimisation problem "manually", in the sense that you can directly program an iterative procedure to optimise the above function, without using pre-programmed optimisation procedures. It would be quite laborious, but it could probably be done in a few hours.
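The "manual" route can be illustrated with the simplest ARMA, an AR(1), for which the exact Gaussian likelihood factorises into the stationary density of the first observation times one-step conditional normals. The pure-Python sketch below is my own illustration, not the answer's method: the grid search is a crude stand-in for Newton-Raphson, and $\mu$ and $\sigma^2$ are held at their true values for brevity.

```python
import math
import random

def ar1_loglik(x, mu, phi, sig2):
    """Exact Gaussian log-likelihood of an AR(1), via the factorisation
    x_1 ~ N(mu, sig2/(1-phi^2)),  x_t | x_{t-1} ~ N(mu + phi*(x_{t-1}-mu), sig2)."""
    v0 = sig2 / (1.0 - phi * phi)            # stationary variance
    ll = -0.5 * (math.log(2 * math.pi * v0) + (x[0] - mu) ** 2 / v0)
    for t in range(1, len(x)):
        m = mu + phi * (x[t - 1] - mu)
        ll += -0.5 * (math.log(2 * math.pi * sig2) + (x[t] - m) ** 2 / sig2)
    return ll

# Simulate an AR(1) with phi = 0.6, then recover phi by brute-force search.
rng = random.Random(1)
phi_true, x = 0.6, [0.0]
for _ in range(2000):
    x.append(phi_true * x[-1] + rng.gauss(0.0, 1.0))
grid = [i / 100.0 for i in range(-95, 96)]
phi_hat = max(grid, key=lambda p: ar1_loglik(x, 0.0, p, 1.0))
print(phi_hat)  # close to 0.6
```

Replacing the grid search by a hand-rolled Newton step on $\partial\ell/\partial\phi$ is the laborious part the answer alludes to.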
Using PCA to reduce dimensionality of training and testing data [duplicate]
Yes, this is a common way of overfitting your model to the test data. In NLP a similar mistake is to do vocabulary selection and bag-of-words vectorization on the full train/test data. This is a bit insidious since doing model selection is a lot easier with most tools once you have your feature matrix. In addition, the "boost" you get is not alarmingly big, so it is tempting to just think your model is great and pat yourself on the back. On a positive note, I think this was a lot more common 5-10 years ago, and most practitioners are wise to this error today.
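To make the correct workflow concrete, here is a minimal pure-Python sketch (my own illustration, not from the answer; in practice one would use a library) of a one-component PCA fitted on the training split only and then reused, unchanged, on the test split:

```python
import random

def pca1_fit(rows):
    """Fit a 1-component PCA: column means plus the leading eigenvector
    of the sample covariance, found by power iteration."""
    n, d = len(rows), len(rows[0])
    mu = [sum(r[j] for r in rows) / n for j in range(d)]
    cov = [[sum((r[i] - mu[i]) * (r[j] - mu[j]) for r in rows) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(200):                       # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return mu, v

def pca1_transform(rows, mu, v):
    return [sum((r[j] - mu[j]) * v[j] for j in range(len(v))) for r in rows]

rng = random.Random(0)
data = [[t, 2.0 * t + rng.gauss(0.0, 0.1)]
        for t in (rng.gauss(0.0, 1.0) for _ in range(300))]
train, test = data[:200], data[200:]
mu, v = pca1_fit(train)                        # fit on the training split ONLY
z_train = pca1_transform(train, mu, v)
z_test = pca1_transform(test, mu, v)           # reuse the train-fitted basis
```

The leakage variant the question describes would call `pca1_fit(data)` on the pooled rows instead, letting test-set structure influence the projection.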
Covariance of An Empirical Distribution Function Evaluated at Different Points
Note that \begin{align*}\text{Cov}(\frac{1}{n} \sum I\{ X_i \le x\},\frac{1}{n} \sum I\{ X_i \le y\}) &=\frac{1}{n^2}\text{Cov}(\sum I\{ X_i \le x\},\sum I\{ X_i \le y\})\\ &=\frac{1}{n^2}\sum_{i=1}^n\text{Cov}(I\{ X_i \le x\},I\{ X_i \le y\})\\ &\qquad\quad{\text{(since the $X_i$'s are independent)}}\\ &=\frac{1}{n}\text{Cov}(I\{ X_1 \le x\},I\{ X_1 \le y\})\end{align*} and \begin{align*} \mathop{\mathbb{E}}[I\{X_1 \le x\}I\{X_1 \le y\}] &= \mathop{\mathbb{E}}[I(X_1 \le \min\{x, y\})] \\&= F(\min\{x,y\}) \end{align*} leading to $$\text{Cov}(\hat F_n(x), \hat F_n(y)) = \frac{1}{n}[F(\min\{x,y\}) - F(x)F(y)]\tag{1}$$ When writing \begin{align*} \mathop{\mathbb{E}}(\hat F_n(x)\cdot \hat F_n(y)) &= \frac{1}{n^2} \mathop{\mathbb{E}}(\sum_i I\{X_i \le x\} \sum_j I\{X_j \le y\}) \\&= \frac{1}{n^2} \mathop{\mathbb{E}}(\underbrace{\sum_{i \neq j}}_{n(n-1)\\\text{distinct}\\\text{pairs}} I\{X_i \le x\}I\{X_j \le y\} + \sum_{i = j} I\{X_i \le x\}I\{X_j \le y\}) \\&\overbrace{=}^\text{wrong!} \frac{1}{n^2}(nF(\min\{x,y\})+nF(x)F(y))\end{align*} the mistake is in not counting the number of distinct pairs $i\ne j$ right: there are $n(n-1)$ of them, rather than $n$. With this correction, $$\mathop{\mathbb{E}}(\hat F_n(x)\cdot \hat F_n(y))=\frac{F(\min\{x,y\})}{n}+\frac{n-1}{n}F(x)F(y)$$ and hence $$\text{Cov}(\hat F_n(x), \hat F_n(y))=\frac{F(\min\{x,y\})}{n}+\frac{n-1}{n}F(x)F(y)-F(x)F(y)=\frac{1}{n}[F(\min\{x,y\}) - F(x)F(y)]$$ recovering (1). Here is an illustration of the fit between theory and empirical evaluation of $\text{Cov}(I\{ X_1 \le x\},I\{ X_1 \le y\})$
a = rnorm(1); b = rnorm(1)
x = rnorm(1e6)
cov((x<a), (x<b))
pnorm(min(c(a,b))) - pnorm(a)*pnorm(b)
based on 10³ random pairs (a,b).
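Formula (1) for general $n$ can also be checked by simulation. The following pure-Python sketch (added for illustration; Uniform(0,1) data are used so that $F(t)=t$) compares the empirical covariance of $(\hat F_n(x),\hat F_n(y))$ over repeated samples with $[\min(x,y)-xy]/n$:

```python
import random

# With Uniform(0,1) data, F(t) = t, so formula (1) predicts
# Cov(Fhat_n(x), Fhat_n(y)) = (min(x, y) - x*y) / n.
rng = random.Random(7)
n, reps, x, y = 10, 100_000, 0.3, 0.7
pairs = []
for _ in range(reps):
    s = [rng.random() for _ in range(n)]
    fx = sum(v <= x for v in s) / n          # Fhat_n(x)
    fy = sum(v <= y for v in s) / n          # Fhat_n(y)
    pairs.append((fx, fy))
mx = sum(a for a, _ in pairs) / reps
my = sum(b for _, b in pairs) / reps
cov = sum((a - mx) * (b - my) for a, b in pairs) / (reps - 1)
print(cov, (min(x, y) - x * y) / n)          # both close to 0.009
```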
Is it possible for two independent variables to be correlated, by chance?
The question is confusing, and so maybe misinterpreted by some commenters and answerers. In your citation, there are two random variables $X$ and $Y$ which are independent. Then, there is a theorem saying that they are uncorrelated. It also has an easy proof, which you can find in many probability texts. But this does not mean that if you have a sample $(X_1,Y_1), \dotsc, (X_n,Y_n)$ from $(X,Y)$, the sample correlation coefficient will be zero! That is what the answer by @Nutle explains. But, if $n$ is large, the sampling distribution of that correlation coefficient will be concentrated close to zero. So, yes, samples from two independent variables can seem to be correlated, by chance. Especially if $n$ is small. That just means that you risk having a type I error.
Is it possible for two independent variables to be correlated, by chance?
It's a matter of chance. Yes, it's always possible that at timepoint x, two independent variables increase or decrease at the same time. At that timepoint it looks like these variables are dependent. However, if the variables are independent, they follow their own trends at most timepoints.
Is it possible for two independent variables to be correlated, by chance?
Yes, and to add to all previous answers and comments, you can easily simulate such a coincidence. E.g., this simple R example:
require(magrittr)
lapply(1:1000, function(b){
  set.seed(b)
  cor(rnorm(100), rnorm(100))
}) %>% do.call(c, .) %>% abs %>% max
Out of 1000 random draws there was a case when two independent normal random variables had a (weak) correlation of $\pm 0.34$. This tends to decrease, of course, to zero as the sample size increases.
Is it possible for two independent variables to be correlated, by chance?
If $X$ and $Y$ are independent from each other, it can be proven that the correlation coefficient must be zero. However, there are two caveats: As others point out, just because you have two populations that are uncorrelated, it does not necessarily mean that the samples drawn from the two populations will also be uncorrelated. Remember that there is a thing called "variance." The reverse is not necessarily true. That is, two perfectly uncorrelated variables are not necessarily independent from each other. Correlation only measures the linear relationship. Just look at the chart below from Wikipedia: The last row shows how two perfectly uncorrelated variables can have a non-linear relationship and thus be dependent.
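The second caveat is easy to demonstrate numerically: take $Y=X^2$ with $X$ standard normal, so $Y$ is completely determined by $X$, yet $\mathrm{Cov}(X,X^2)=\mathbb E[X^3]=0$. A pure-Python sketch (my own illustration):

```python
import random

rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
y = [v * v for v in x]                 # y is completely determined by x ...

def pearson(a, b):
    """Sample Pearson correlation, computed from first principles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (w - mb) for u, w in zip(a, b)) / n
    sa = (sum((u - ma) ** 2 for u in a) / n) ** 0.5
    sb = (sum((w - mb) ** 2 for w in b) / n) ** 0.5
    return cov / (sa * sb)

print(round(pearson(x, y), 3))         # ... yet the correlation is near 0
```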
Neural network's weight reduction
You might wanna check: http://yann.lecun.com/exdb/publis/pdf/lecun-90b.pdf And a more recent paper on the topic: https://arxiv.org/pdf/1506.02626v3.pdf However, I was not able to find an implementation of these two. So you will need to implement it yourself.
Neural network's weight reduction
After reading some of the helpful comments and answers, I've done some focused reading on my own. As mentioned in other answers, this process is called Pruning and, like many other ideas in the neural network area, it is not new. From what I can tell, it originates in LeCun's 1990 paper with the lovely title "Optimal Brain Damage" (the paper cites some earlier works on network minimization from the late 80's, but I didn't go that far down the rabbit hole). The main idea was to approximate the change in loss caused by removing a feature map and minimize it: $$\Delta C(h_i) = |C(\mathcal{D}\mid W_0) - C(\mathcal{D}\mid W)|$$ where $C$ is the cost function, $\mathcal{D}$ is our dataset (of $x$ samples and $y$ labels) and $W$ are the weights of the model ($W_0$ are the original weights). $h_i$ is the output produced from parameter $i$, which can be either a full feature map in convolution layers or a single neuron in dense layers. More recent works on the subject include: "2016 - Pruning convolutional neural networks for resource efficient inference" In this paper they propose the following iterative process for pruning CNNs in a greedy manner: They present and test several criteria for the pruning process. The first and most natural one to use is oracle pruning, which aims to minimize the difference in accuracy between the full and pruned models. However, it is very costly to compute, requiring $\|W_0\|$ evaluations on the training dataset. More heuristic criteria which are much more computationally efficient are: Minimum Weight - Assuming that a convolutional kernel with low L2 norm detects less important features than those with a high norm. Activation - Assuming that an activation value of a feature map is smaller for less important features. Information Gain - $IG(y|x) = H(x) + H(y) - H(x, y)$, where $H$ is the entropy. Taylor Expansion - Based on the Taylor expansion, we directly approximate the change in the loss function from removing a particular parameter.
2016 - Dynamic Network Surgery for Efficient DNNs Unlike the previous methods, which accomplish this task in a greedy way, they incorporate connection splicing into the whole process to avoid incorrect pruning, treating it as continual network maintenance. With this method, without any accuracy loss, they efficiently compress the number of parameters in LeNet-5 and AlexNet by factors of 108× and 17.7× respectively. The figures and much of what I wrote are based on the original papers. Another useful explanation can be found in the following link: Pruning deep neural networks to make them fast and small. A good tool for modifying trained Keras models is the Keras-surgeon. It currently enables easy methods to: delete neurons/channels from layers, delete layers, insert layers and replace layers. I didn't find any methods for the actual pruning process (testing criteria, optimisation, etc.).
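As a concrete toy version of the Minimum Weight criterion mentioned above, here is a pure-Python sketch (my own illustration, not code from any of the cited papers) that zeroes out a chosen fraction of the smallest-magnitude weights in a flat list:

```python
def prune_by_magnitude(weights, fraction):
    """Minimum-weight criterion, sketched for a flat list of weights:
    zero out the given fraction of entries with the smallest |w|."""
    k = int(len(weights) * fraction)
    if k == 0:
        return list(weights)
    cutoff = sorted(abs(w) for w in weights)[k]   # k-th smallest magnitude
    return [0.0 if abs(w) < cutoff else w for w in weights]

w = [0.03, -1.2, 0.004, 0.8, -0.05, 2.1, -0.001, 0.4]
print(prune_by_magnitude(w, 0.5))
# [0.0, -1.2, 0.0, 0.8, 0.0, 2.1, 0.0, 0.4]
```

In a real setting the same idea is applied per layer (or per feature map, using the kernel's L2 norm), typically followed by fine-tuning, as in the iterative loop described in the paper.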
Neural network's weight reduction
41,433
how to make sense of the number of observations per parameters in deep learning models?
In a classical machine learning (i.e. statistical learning theory) setup, the number of parameters usually enters via the Vapnik–Chervonenkis (VC) dimension and the number of observations via the PAC bound. Very roughly speaking, this says that for classification problems, the worst-case difference in 0-1 loss between training and test set is of the order $\sqrt{D/N}$, with $N$ the number of observations and $D$ the VC dimension. Usually, the VC dimension increases with the number of parameters (how exactly depends on the model class). This result can be generalized beyond the binary classification setting. For neural networks, a quick Google Scholar search gives for example Size-Independent Sample Complexity of Neural Networks. More recent results support the idea that as the number of parameters passes the threshold of perfect (over)fitting, the test error decreases again, since a model with more parameters is more expressive and able to fit the data using smoother functions. This is probably what is happening in your example. See for example Reconciling modern machine learning and the bias-variance trade-off, or Generalization in Machine Learning via Analytical Learning Theory.
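To make the $\sqrt{D/N}$ scaling concrete, here is a small sketch evaluating one classical form of the VC bound: with probability at least $1-\delta$, the gap is at most $\sqrt{(D(\ln(2N/D)+1)+\ln(4/\delta))/N}$. The exact constants vary across textbooks, so treat the numbers as illustrative.

```python
import math

def vc_gap_bound(D, N, delta=0.05):
    """One classical form of the VC generalization bound: an upper bound
    on the train/test 0-1 loss gap, holding with probability >= 1 - delta."""
    return math.sqrt((D * (math.log(2 * N / D) + 1) + math.log(4 / delta)) / N)

# The bound loosens with more parameters (larger D) and tightens with more data.
print(vc_gap_bound(D=100, N=1_000))
print(vc_gap_bound(D=100, N=100_000))
```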
41,434
how to make sense of the number of observations per parameters in deep learning models?
So essentially the answer is regularisation. For NNs this is stopped training (i.e. initialise weights near zero and stop when you reach the minimum in validation error, before you reach the minimum in training error), dropout (randomly disable parts of the network), and weight regularisation (L2). Even in the linear case, the lasso (L1 regularisation) has been used for $n < p$, and for ridge regression (L2 regularisation) you have the notion of effective degrees of freedom (see Effective degrees of freedom for regularized regression). Although you are right that the majority of parameters are in the fully connected layer, and pooling etc. reduce the effective input dimension, the number of parameters in the fully connected layer is still an important quantity, as it regulates the nonlinearity (cf. 1-d polynomial regression).
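The effective degrees of freedom mentioned above has a simple closed form for ridge regression, $\mathrm{df}(\lambda) = \sum_i d_i^2/(d_i^2+\lambda)$, where the $d_i$ are the singular values of the design matrix; a minimal sketch with made-up singular values:

```python
def ridge_edf(singular_values, lam):
    """Effective degrees of freedom of ridge regression with penalty lam,
    given the singular values d_i of the design matrix:
    df(lam) = sum_i d_i^2 / (d_i^2 + lam)."""
    return sum(d * d / (d * d + lam) for d in singular_values)

d = [10.0, 5.0, 1.0, 0.5]  # hypothetical singular values of X
print(ridge_edf(d, 0.0))   # no penalty: df equals the number of columns (4)
print(ridge_edf(d, 10.0))  # shrinkage reduces the effective df below 4
```

This is one way to make "number of parameters" meaningful for a regularized model: the penalty continuously interpolates between the full parameter count and zero.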
41,435
how to make sense of the number of observations per parameters in deep learning models?
For any estimation problem, one would typically require the parameters of the statistical model to be identifiable. The one-to-one correspondence between the number of observations and the number of parameters actually comes from inspecting whether the Information Matrix is singular or not. If the information matrix is non-singular for all possible parameter values, then, broadly speaking, you can find unique values for the parameters of the model given your observations. For the Gaussian distribution, which is equivalent to least-squares fitting with known covariance, the information matrix can be written as $$ I(\theta) = J^T(\theta) Q^{-1} J(\theta) $$ and if the Jacobian $J$ has more columns than rows, that is, more parameters than observations, $I$ will be singular for all values of $\theta$. However, if you have an estimation problem with more parameters than observations, you typically need to constrain the space of allowable values of the parameters, which can be done with regularization. All of this is described in more detail here.
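A tiny concrete check of the singularity claim: with one observation and two parameters the Jacobian has more columns than rows, so $J^T J$ (the information matrix up to the $Q^{-1}$ weighting) has zero determinant. The numbers below are illustrative.

```python
def jtj_determinant(J_row):
    """For a single-observation Jacobian J = [a, b] (1 row, 2 parameters),
    J^T J = [[a*a, a*b], [a*b, b*b]] is 2x2 but has rank at most 1."""
    a, b = J_row
    jtj = [[a * a, a * b], [a * b, b * b]]
    return jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]

# More parameters (2) than observations (1): the information matrix is singular
# no matter what the Jacobian entries are.
print(jtj_determinant([3.0, 4.0]))  # determinant is 0
```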
41,436
Time steps in Keras LSTM
As described by Andrej Karpathy, the basic recurrent neural network cell is something like $$ h_t = \tanh(W_{hh}h_{t-1} + W_{xh}x_t) $$ so it takes the previous hidden state $h_{t-1}$ and the current input $x_t$ to produce the hidden state $h_t$. Notice that $W_{hh}$ and $W_{xh}$ are not indexed by time $t$; we use the same weights for each timestep. In simplified Python code, the forward pass is basically a for-loop: for t in range(timesteps): h[t] = np.tanh(np.dot(Wxh, x[t]) + np.dot(Whh, h[t-1])) So it doesn't matter how many timesteps there are; it is just a matter of how it is implemented. People often use a fixed number of timesteps to simplify the code and work with simpler data structures. In Keras, the RNN cells take as input tensors of shape (batch_size, timesteps, input_dim), but you can set the first two to None if you want to use varying sizes. For example, if you use (None, None, input_dim), it will accept batches of any size and any number of timesteps, with input_dim features (this needs to be fixed). This is possible because the forward pass is a for-loop that applies the same function at every timestep. It would be more complicated in other cases, where varying input sizes would require varying sizes for the parameter vectors (say, in a densely connected layer).
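The same idea can be shown end-to-end in plain Python (hypothetical weights, hidden size 2, input dimension 1): the identical weight matrices process sequences of different lengths.

```python
import math

Whh = [[0.5, -0.1], [0.2, 0.3]]   # hidden-to-hidden weights (hypothetical)
Wxh = [[0.8], [-0.4]]             # input-to-hidden weights (hypothetical)

def rnn_forward(xs):
    """Run the basic RNN cell h_t = tanh(Whh h_{t-1} + Wxh x_t)
    over a sequence xs of arbitrary length, reusing the same weights."""
    h = [0.0, 0.0]
    for x in xs:
        h = [math.tanh(Whh[i][0] * h[0] + Whh[i][1] * h[1] + Wxh[i][0] * x)
             for i in range(2)]
    return h

# The same cell handles 3 timesteps or 7 timesteps -- no new parameters needed.
print(rnn_forward([1.0, 0.5, -0.2]))
print(rnn_forward([0.1] * 7))
```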
41,437
Proof that posterior median is the Bayes estimate of absolute loss?
The second derivative yields $$\frac{\partial^2 \rho}{\partial \delta^2} = 2 \pi (\delta | x) \geq 0,$$ so the original function is convex and hence the median corresponds to a minimum, not an inflection point.
41,438
Proof that posterior median is the Bayes estimate of absolute loss?
Adding to Xiaomi's answer, here is the derivation, using the Leibniz rule: $\require{cancel} \frac{\partial \rho}{\partial \delta} = \int_{-\infty}^{\delta}\pi(\theta|x)d\theta - \int_{\delta}^{\infty}\pi(\theta|x)d\theta \\ \frac{\partial^2 \rho}{\partial \delta^2} = \pi(\delta|x)\cdot1 - \cancel{\pi(\delta|x)\cdot0} + \cancel{\int_{-\infty}^{\delta}0d\theta} - [\cancel{\pi(\delta|x)\cdot0} - \pi(\delta|x)\cdot1 + \cancel{\int_{\delta}^{\infty}0d\theta}] = 2\pi(\delta|x) $
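A quick numerical sanity check of the result, using a tiny made-up sample as a stand-in for posterior draws: the sample-average approximation of $E|\theta - \delta|$ is minimized at the sample median.

```python
import statistics

data = [1, 2, 3, 7, 10]  # stand-in for posterior draws

def mean_abs_loss(delta):
    """Sample-average estimate of E|theta - delta|."""
    return sum(abs(x - delta) for x in data) / len(data)

# Evaluate the loss on a grid and find its minimizer.
grid = [d / 10 for d in range(0, 121)]   # delta in [0, 12] in steps of 0.1
best = min(grid, key=mean_abs_loss)
print(best, statistics.median(data))     # the minimizer coincides with the median
```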
41,439
I regularized my linear regression, now what?
One common approach is to now redo the regression (without regularization) using only the variables that were selected by LASSO. This is called "post-selection inference." See Lee et al. 2016 for finding p-values and confidence intervals on the resulting estimates.
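A minimal sketch of the refit step, assuming the lasso stage has already selected a single predictor (the data and the selection are hypothetical; the adjusted inference of Lee et al. is beyond this snippet):

```python
def ols_refit(x, y):
    """Ordinary least squares for y = a + b*x on the lasso-selected predictor."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    return a, b

# Hypothetical data; the refit recovers intercept 1 and slope 2.
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]
print(ols_refit(x, y))
```

Note the caveat in the cited paper: naive p-values from such a refit ignore the selection step, which is exactly what post-selection inference corrects.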
41,440
I regularized my linear regression, now what?
You may ask yourself what the goal of building this model is. Are you trying to get better prediction performance, or do you want a linear model that is statistically significant? These two goals are not necessarily aligned. From a machine learning perspective, you always want to know whether you are overfitting or underfitting. If you are already underfitting, regularization will make it worse.
41,441
Linear Regression of Indicator Matrix: sum of predictions is 1
The intercept in the model (a column of 1's in $X$) is the key. Write $\hat{Y} = \hat{f}_k(X)$ as the fit of $Y$. \begin{equation*} \begin{split} \sum_{k \in \{1, .., K\}} \hat{f}_k(X) &= \hat{Y}\cdot\textbf{1}_{K} \\ & = X(X^TX)^{-1}X^TY\cdot\textbf{1}_{K} \\ & = P\cdot\textbf{1}_{N} \end{split} \end{equation*} where $\textbf{1}_{K}$ is a vector of ones of dimension $K$, $Y\cdot\textbf{1}_{K}=\textbf{1}_{N}$ because $Y$ is an indicator matrix (each row sums to 1), and $P=X(X^TX)^{-1}X^T$ is the projection matrix. $Pa$ is the projection of vector $a$ onto the column space of $X$. If there is an intercept in the model, then $\textbf{1}_{N}$ is in the column space of $X$, thus $\sum_{k \in \{1, .., K\}} \hat{f}_k(X)= P\cdot\textbf{1}_{N} = \textbf{1}_{N}$. And for each observation $x$ in $X$, $\sum_{k \in \{1, .., K\}} \hat{f}_k(x)= 1$.
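A small numeric check of the claim, in plain Python: a tiny design with an intercept column and one covariate ($N = 3$), an indicator response with $K = 2$ classes, and the least-squares fit computed from the normal equations. Values are made up.

```python
def matmul(A, B):
    """Plain-Python matrix multiply."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]       # intercept + one covariate
Y = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]       # indicator responses, rows sum to 1
Xt = [list(col) for col in zip(*X)]

B = matmul(inv2(matmul(Xt, X)), matmul(Xt, Y))  # B = (X^T X)^{-1} X^T Y
Yhat = matmul(X, B)
print([sum(row) for row in Yhat])               # each fitted row sums to 1
```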
41,442
Linear Regression of Indicator Matrix: sum of predictions is 1
Yeah, it is straightforward if you're familiar with the centered model (or form) of $X$. Consider that we have $K$ classes (ranging over $1, ..., K$) in our problem. As you showed, we compute a fitted output through the equation (a more compact notation is $\hat{f}_k(x)^T$, denoting the discriminant of the $k$-th class): $$\hat{f}_k(x)^T = (1,x^T)\hat{\textbf{B}}$$ That is equivalent to: $$(1,x^T)\begin{bmatrix}\hat{\textbf{B}}_{0} \\ \hat{\textbf{B}}_{1\sim k}\end{bmatrix}$$ Also, if we decompose $\hat{\textbf{B}}$ into its original least-squares solution, that is: $$\hat{\textbf{B}} = (X^TX)^{-1}X^TY_{N \times K}$$ the whole new equation is: $$\hat{f}_k(x)^T = {\hat{\textbf{B}}_{0}}_{1 \times K} + x^T_{1 \times p}(X^TX)^{-1}X^TY_{N \times K}$$ And it is completely fine to use a centered form of $X$ (subtracting the mean vector from each observation) instead of the original one: $$\hat{f}_k(x)^T = {\hat{\textbf{A}}_{0}}_{1 \times K} + {x_c^T}_{1 \times p}(X^T_c X_c)^{-1}X_c^TY_{N \times K}$$ Here ${\hat{\textbf{A}}_{0}}_{1 \times K}$ is the intercept vector in the centered form. Now take a vector of ones, $\textbf{1}_{K \times 1}$, and multiply the whole equation by it, so that it becomes: $$\hat{f}_k(x)^T\textbf{1}_{K \times 1} = {\hat{\textbf{A}}_{0}}_{1 \times K}\textbf{1}_{K \times 1} + {x_c^T}_{1 \times p}(X^T_c X_c)^{-1}X_c^TY_{N \times K}\textbf{1}_{K \times 1}$$ $Y_{N \times K}$ is a matrix where each row has only a single 1.
So $Y_{N \times K}\textbf{1}_{K \times 1} = \textbf{1}_{N \times 1}$, and hence, the equation is: $$\hat{f}_k(x)^T\textbf{1}_{K \times 1} = {\hat{\textbf{A}}_{0}}_{1 \times K}\textbf{1}_{K \times 1} + {x_c^T}_{1 \times p}(X^T_c X_c)^{-1}\underbrace{X_c^T\textbf{1}_{N \times 1}}_{\text{sum of centered values = 0}}$$ And since in the centered form ${\hat{\textbf{A}}_{0}}_{1 \times K}$ is the vector of column means of $Y$ (the class proportions), whose entries sum to 1, we get: $$\hat{f}_k(x)^T\textbf{1}_{K \times 1} = \underbrace{{\hat{\textbf{A}}_{0}}_{1 \times K}\textbf{1}_{K \times 1}}_{\text{1}} = 1 \tag*{Q.E.D $\blacksquare$}$$
41,443
Linear Regression of Indicator Matrix: sum of predictions is 1
$(1,x^T)\hat{B}\mathbf{1}_{K\times1} = 1$ should hold for any $x \in \mathbb{R}^{p}$ rather than $x$ in the training set. Since $\mathbf{X}' = [\mathbf{1}_{N \times 1}, \mathbf{X}_{N \times p}] $, $\begin{align*}\hat{B}\mathbf{1}_{K \times 1} &= (\mathbf{X}'^T\mathbf{X}')^{-1}\mathbf{X}'^T \mathbf{Y}\mathbf{1}_{K \times 1} \\ &=\begin{bmatrix} N & \mathbf{1}_{N \times 1}^T \mathbf{X}\\ \mathbf{X}^T \mathbf{1}_{N \times 1}& \mathbf{X}^T\mathbf{X} \end{bmatrix}^{-1} \begin{bmatrix} \mathbf{1}_{N \times 1}^T \\ \mathbf{X}^T \end{bmatrix} \mathbf{1}_{N \times 1}\\ &= \begin{bmatrix} N & \mathbf{1}_{N \times 1}^T \mathbf{X}\\ \mathbf{X}^T \mathbf{1}_{N \times 1}& \mathbf{X}^T\mathbf{X} \end{bmatrix}^{-1} \begin{bmatrix} N \\ \mathbf{X}^T \mathbf{1}_{N \times 1} \end{bmatrix} \end{align*}$. Recall $\begin{bmatrix} N & \mathbf{1}_{N \times 1}^T \mathbf{X}\\ \mathbf{X}^T \mathbf{1}_{N \times 1}& \mathbf{X}^T\mathbf{X} \end{bmatrix}^{-1} \begin{bmatrix} N & \mathbf{1}_{N \times 1}^T \mathbf{X}\\ \mathbf{X}^T \mathbf{1}_{N \times 1}& \mathbf{X}^T\mathbf{X} \end{bmatrix} = \mathbf{I}_{(p+1) \times (p+1)} $, and $ \begin{bmatrix} N \\ \mathbf{X}^T \mathbf{1}_{N \times 1} \end{bmatrix} $ is the first column $\implies \hat{B}\mathbf{1}_{K \times 1} = \begin{bmatrix} 1 \\ \mathbf{0}_{p \times 1} \end{bmatrix}$, and $(1,x^T)\hat{B}\mathbf{1}_{K\times1} = 1$.
41,444
Comparison of variance between two samples with unequal sample size
Location differences

Mann-Whitney does not test mean differences

I'll begin by knocking off the simpler questions. If your primary question about location differences is a difference in means, then you probably do not want a non-parametric test like the Mann-Whitney test. The Mann-Whitney test is a test of stochastic dominance: given two groups A and B, if I were to randomly draw one value from each, which would tend to be greater? If the draws cancel out on average, then there is no stochastic dominance; otherwise one group stochastically dominates the other. This test works regardless of non-normality or heteroskedasticity. However, if you have non-normality and heteroskedasticity in particular, then this test is anything but a test of the mean difference. The mean difference can be zero, yet you can easily have stochastic dominance of one group over the other. Given the choice, I personally may be more interested in a test of stochastic dominance, but these tests are not commonly explained this way in most applied literature I come across.

Non-normality may not be too important

The next issue is normality in relation to mean differences. (We will bring up normality again in relation to variance differences, as things work somewhat differently there.) Unless you expect the data to be extremely non-normal, the sample sizes you have may be large enough to ignore questions of normality. If the data are extremely non-normal, or you can hypothesize a theoretical distribution from which they may arise, then it may be better to run the model using that distribution, such as a Poisson for count data or a skewed distribution for income. Also, certain transformations might make sense theoretically, such that you can expect to use them even before viewing the data. The point I'm trying to make is that, with relation to mean differences, heteroskedasticity may be more consequential in your situation.
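A toy illustration of that distinction, with made-up samples sharing the same mean: the pairwise probability behind the Mann-Whitney statistic can sit far from 1/2 even when the mean difference is exactly zero.

```python
def prob_greater(a, b):
    """P(A > B) + 0.5 * P(A = B) over all pairs -- the quantity behind
    the Mann-Whitney statistic."""
    wins = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    return wins / (len(a) * len(b))

A = [0, 0, 0, 0, 10]   # skewed sample: mean 2
B = [2, 2, 2, 2, 2]    # constant sample: mean 2
print(sum(A) / len(A), sum(B) / len(B))  # identical means
print(prob_greater(A, B))                # 0.2, far from the 'no dominance' value 0.5
```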
If you have to examine normality, know that all other things held constant, statistical tests improve in their ability to detect differences given greater sample size. Shapiro-Wilk might be able to detect minor deviations from normality at the sample size you have. Additionally, if you do the test at all, you should probably do it on the data after subtracting off the group means. Most importantly though, making future decisions contingent on such preliminary tests can make your eventual decision flawed. I do not know of studies of the sort with normality testing, but there are such studies with heteroskedasticity-testing, see for one: Zimmerman, D. W. (2004). A note on preliminary tests of equality of variances. British Journal of Mathematical and Statistical Psychology, 57(1), 173–181. https://doi.org/10.1348/000711004849222 Dealing with heteroskedasticity So if your primary question about location differences is a difference in means, then my recommendation would be to compute the difference in means and use a test that can adjust for possible violations of heteroskedasticity. The most developed requiring very little additional computer time (wild bootstrapping is good but can take eons) are heteroskedasticity-consistent standard errors in econometrics. I would recommend the HC3, HC4 or HC5 variants. See: Hausman, J., & Palmer, C. (2012). Heteroskedasticity-robust inference in finite samples. Economics Letters, 116(2), 232–235. https://doi.org/10.1016/j.econlet.2012.02.007 Cribari-Neto, F., Souza, T. C., & Vasconcellos, K. L. P. (2007). Inference Under Heteroskedasticity and Leveraged Data. Communications in Statistics - Theory and Methods, 36(10), 1877–1888. https://doi.org/10.1080/03610920601126589 These methods are more recently developed than Welch's correction and do not require you to know the correct model specification for the variance. 
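As an illustrative pure-Python sketch (my own code, not from the papers cited above): for a regression of an outcome on an intercept plus a single group dummy, the slope equals the difference in group means, each observation's leverage is $1/n_g$ within its group, and the HC3 variance of the mean difference collapses to a per-group sum of inflated squared residuals.

```python
def hc3_mean_difference(group_a, group_b):
    """Return (mean of B minus mean of A, HC3 standard error)."""
    def part(values):
        n = len(values)
        m = sum(values) / n
        # each observation's leverage on its own group mean is 1/n,
        # so HC3 scales squared residuals by 1/(1 - 1/n)^2
        var = sum((v - m) ** 2 / (1 - 1 / n) ** 2 for v in values) / n ** 2
        return m, var

    mean_a, var_a = part(group_a)
    mean_b, var_b = part(group_b)
    return mean_b - mean_a, (var_a + var_b) ** 0.5
```

Dividing the mean difference by this standard error gives a robust t-type statistic; in practice you would hand this job to a regression package with HC3 covariance rather than code it by hand.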
So run a regression of the outcome on group: the coefficient is your mean difference, and the robust error correction corrects the p-value for heteroskedasticity. There are methods that allow you to simultaneously model the mean and variance such as generalized least squares, but normality of the data comes back into play in relation to the test of the variance. I hope the above helps in relation to questions about mean differences. Variance differences Brief simulation I conducted I next turn to the other primary question about variance differences. I ran some simulations of this weeks ago. I assumed that the mean and variance of the outcome were a function of the groups alone $-$ a simplifying assumption that would be met in, say, a randomized trial. I varied: the distribution of the data, normal, or skewed ($\chi^2_8$ centered and scaled to meet mean and variance requirements $\approx$ skew of 1 asymptotically). The choice of $\chi^2$ is not ideal for generating skewed data especially under unbalanced design but I think it suffices. balanced versus unbalanced design (1:3, so not as extreme as your situation). And the maximum sample size I considered was 200 persons in both groups. I tested the ability of the methods I considered to maintain nominal error rate and statistical power. To knock off power questions now, at sample sizes below the OP's, most methods displayed similar statistical power with regard to detecting variance differences. But not all had the ability to maintain the nominal error rate. So when I say "performed relatively well" below, I mean it maintained the nominal error rate. Levene test with median and OLS on squared residuals may be good choices The most standard way is the $F$-test. But unless your data are normally distributed, this test behaves very badly. So one can take it off the table. The next standard is Levene's test.
If you are concerned about normality, you can robustify it by conducting Levene's test using the median in place of the mean in the formula for the test. In the simulations I conducted, this approach seemed to perform relatively well across a variety of situations and should be available in major statistical packages. The finding in my own simulations is backed up by the recommendations from the NIST engineering statistics handbook: https://itl.nist.gov/div898/handbook/eda/section3/eda35a.htm. However, I also found that you can take the results from the first regression you conducted to obtain the mean differences. Square the residuals from this regression, then regress this squared residual on the group variable again. I found this approach to testing the variance differences to perform well across all conditions in my simulation. In summary So to recap, if I were in your situation and wanted to make informed decisions a priori assuming non-normality is not extreme, I would: conduct the standard regression model regressing the data on group membership to obtain the mean difference, and use heteroskedasticity-consistent standard errors for inference I would conduct Levene's test using the median as center rather than the mean. I might also use the regression of squared residuals approach. Additional methods I tested were Levene's test using the Hodges-Lehmann median (a nice robust estimator which has a relation to the aforementioned Mann-Whitney) as the center; generalized least squares; and three methods from the structural equation modeling literature: diagonally-weighted least squares; mean- and variance-adjusted OLS; and maximum likelihood with a sandwich estimator commonly referred to as MLM. The methods I focused on in the bulk of the text won out.
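The median-centred Levene test described above (often called the Brown-Forsythe test) is simple enough to write out. A minimal pure-Python sketch for two groups, illustrative rather than production code: replace each value by its absolute deviation from its group's median, then compute the usual one-way ANOVA F statistic on those deviations and compare it against an $F(1, N-2)$ reference distribution.

```python
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def brown_forsythe(group_a, group_b):
    """Levene's test with median centring; returns the W statistic."""
    groups = [group_a, group_b]
    # absolute deviations from each group's own median
    z = [[abs(v - median(g)) for v in g] for g in groups]
    n = [len(g) for g in groups]
    N, k = sum(n), len(groups)
    zbar_g = [sum(zg) / ng for zg, ng in zip(z, n)]
    zbar = sum(sum(zg) for zg in z) / N
    between = sum(ng * (zb - zbar) ** 2 for ng, zb in zip(n, zbar_g))
    within = sum((v - zb) ** 2 for zg, zb in zip(z, zbar_g) for v in zg)
    return (N - k) / (k - 1) * between / within
```

In practice you would call the equivalent routine in a statistical package with the centring option set to the median rather than the mean.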
Comparison of variance between two samples with unequal sample size
Location differences Mann-Whitney does not test mean differences I'll begin by knocking off the simpler questions. If your primary question about location differences is a difference in means, then yo
Comparison of variance between two samples with unequal sample size Location differences Mann-Whitney does not test mean differences I'll begin by knocking off the simpler questions. If your primary question about location differences is a difference in means, then you probably do not want a non-parametric test like the Mann-Whitney test. The Mann-Whitney test is a test of stochastic dominance: given two groups A and B, if I were to randomly draw from A and B, which value would tend to be greater? If they cancel out on average, then there is no stochastic dominance; the opposite follows. This test would work regardless of non-normality or heteroskedasticity. However, if you have non-normality and heteroskedasticity in particular, then this test is anything but a test of the mean-difference. The mean-difference can be zero, but you can easily have stochastic dominance of one group over the other. Given the choice, I personally may be more interested in a test of stochastic dominance, but they are not commonly explained like this in most applied literature I come across. Non-normality may not be too important The next issue is normality in relation to mean differences. We will bring up normality again in relation to variance differences, as things work somewhat differently there. Unless you expect the data to be extremely non-normal, the sample sizes you have may be large enough to ignore questions of normality. If the data are extremely non-normal or you can hypothesize a theoretical distribution from which they may arise, then maybe it is better to run the model using that distribution, such as Poisson for count data or income. Also, certain transformations might make sense theoretically, such that you can expect to use them even before viewing the data. The point I'm trying to make is that, with relation to mean differences, heteroskedasticity may be more consequential in your situation.
If you have to examine normality, know that all other things held constant, statistical tests improve in their ability to detect differences given greater sample size. Shapiro-Wilk might be able to detect minor deviations from normality at the sample size you have. Additionally, if you do the test at all, you should probably do it on the data after subtracting off the group means. Most importantly though, making future decisions contingent on such preliminary tests can make your eventual decision flawed. I do not know of studies of the sort with normality testing, but there are such studies with heteroskedasticity-testing, see for one: Zimmerman, D. W. (2004). A note on preliminary tests of equality of variances. British Journal of Mathematical and Statistical Psychology, 57(1), 173–181. https://doi.org/10.1348/000711004849222 Dealing with heteroskedasticity So if your primary question about location differences is a difference in means, then my recommendation would be to compute the difference in means and use a test that can adjust for possible violations of heteroskedasticity. The most developed requiring very little additional computer time (wild bootstrapping is good but can take eons) are heteroskedasticity-consistent standard errors in econometrics. I would recommend the HC3, HC4 or HC5 variants. See: Hausman, J., & Palmer, C. (2012). Heteroskedasticity-robust inference in finite samples. Economics Letters, 116(2), 232–235. https://doi.org/10.1016/j.econlet.2012.02.007 Cribari-Neto, F., Souza, T. C., & Vasconcellos, K. L. P. (2007). Inference Under Heteroskedasticity and Leveraged Data. Communications in Statistics - Theory and Methods, 36(10), 1877–1888. https://doi.org/10.1080/03610920601126589 These methods are more recently developed than Welch's correction and do not require you to know the correct model specification for the variance. 
So run a regression of the outcome on group: the coefficient is your mean difference, and the robust error correction corrects the p-value for heteroskedasticity. There are methods that allow you to simultaneously model the mean and variance such as generalized least squares, but normality of the data comes back into play in relation to the test of the variance. I hope the above helps in relation to questions about mean differences. Variance differences Brief simulation I conducted I next turn to the other primary question about variance differences. I ran some simulations of this weeks ago. I assumed that the mean and variance of the outcome were a function of the groups alone $-$ a simplifying assumption that would be met in, say, a randomized trial. I varied: the distribution of the data, normal, or skewed ($\chi^2_8$ centered and scaled to meet mean and variance requirements $\approx$ skew of 1 asymptotically). The choice of $\chi^2$ is not ideal for generating skewed data especially under unbalanced design but I think it suffices. balanced versus unbalanced design (1:3, so not as extreme as your situation). And the maximum sample size I considered was 200 persons in both groups. I tested the ability of the methods I considered to maintain nominal error rate and statistical power. To knock off power questions now, at sample sizes below the OP's, most methods displayed similar statistical power with regard to detecting variance differences. But not all had the ability to maintain the nominal error rate. So when I say "performed relatively well" below, I mean it maintained the nominal error rate. Levene test with median and OLS on squared residuals may be good choices The most standard way is the $F$-test. But unless your data are normally distributed, this test behaves very badly. So one can take it off the table. The next standard is Levene's test.
If you are concerned about normality, you can robustify it by conducting Levene's test using the median in place of the mean in the formula for the test. In the simulations I conducted, this approach seemed to perform relatively well across a variety of situations and should be available in major statistical packages. The finding in my own simulations is backed up by the recommendations from the NIST engineering statistics handbook: https://itl.nist.gov/div898/handbook/eda/section3/eda35a.htm. However, I also found that you can take the results from the first regression you conducted to obtain the mean differences. Square the residuals from this regression, then regress this squared residual on the group variable again. I found this approach to testing the variance differences to perform well across all conditions in my simulation. In summary So to recap, if I were in your situation and wanted to make informed decisions a priori assuming non-normality is not extreme, I would: conduct the standard regression model regressing the data on group membership to obtain the mean difference, and use heteroskedasticity-consistent standard errors for inference I would conduct Levene's test using the median as center rather than the mean. I might also use the regression of squared residuals approach. Additional methods I tested were Levene's test using the Hodges-Lehmann median (a nice robust estimator which has a relation to the aforementioned Mann-Whitney) as the center; generalized least squares; and three methods from the structural equation modeling literature: diagonally-weighted least squares; mean- and variance-adjusted OLS; and maximum likelihood with a sandwich estimator commonly referred to as MLM. The methods I focused on in the bulk of the text won out.
Comparison of variance between two samples with unequal sample size Location differences Mann-Whitney does not test mean differences I'll begin by knocking off the simpler questions. If your primary question about location differences is a difference in means, then yo
41,445
Does class balancing introduce bias?
For the main question: Does class balancing introduce bias? Yes, in most cases it does. Since the new data points are generated from the old ones, they can't introduce much variance to the dataset. In most cases they are only slightly different from the original ones. Does oversampling before splitting introduce bias? Yes, and this is why you should perform the splitting before balancing the training set. You want your test set to be as unbiased as possible in order to get an objective evaluation of the model's performance. If balancing was performed before splitting the datasets, the model might have seen information on the test set, during training, through the generated data points. Is it more scientifically correct to oversample after splitting the training and test set individually? You shouldn't over-sample the test set. The test set should be as objective as possible. By generating new test set data and evaluating your model on those, the procedure would lose its objectivity. Do we have to balance the test set? No, you shouldn't under any condition balance the test set. Could ENN and/or SMOTE introduce bias for specific classifiers? I don't think that k-NN or any other specific classifier would be more biased to the test set than the others. I'm not sure about this, though.
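The recommended order of operations can be sketched in a few lines of pure Python. The function names here are illustrative, and the oversampler is plain random duplication of minority examples, a stand-in for SMOTE (which interpolates between neighbours rather than duplicating): split first, then balance only the training portion, and leave the test set untouched.

```python
import random

def train_test_split(data, labels, test_frac=0.25, seed=0):
    """Randomly partition (data, labels) into train and test portions."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    return ([data[i] for i in train], [labels[i] for i in train],
            [data[i] for i in test], [labels[i] for i in test])

def random_oversample(data, labels, seed=0):
    """Duplicate minority-class points until all classes are equal in size."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(data, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        out_x.extend(xs)
        out_y.extend([y] * len(xs))
        for _ in range(target - len(xs)):   # duplicate minority points
            out_x.append(rng.choice(xs))
            out_y.append(y)
    return out_x, out_y
```

Usage: call `train_test_split` first, pass only the training portion to `random_oversample`, and evaluate on the raw test portion; reversing the order lets copies of test points leak into training.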
Does class balancing introduce bias?
For the main question: Does class balancing introduce bias? Yes, in most cases it does. Since the new data points are generated from the old ones, they can't introduce much variance to the dataset.
Does class balancing introduce bias? For the main question: Does class balancing introduce bias? Yes, in most cases it does. Since the new data points are generated from the old ones, they can't introduce much variance to the dataset. In most cases they are only slightly different from the original ones. Does oversampling before splitting introduce bias? Yes, and this is why you should perform the splitting before balancing the training set. You want your test set to be as unbiased as possible in order to get an objective evaluation of the model's performance. If balancing was performed before splitting the datasets, the model might have seen information on the test set, during training, through the generated data points. Is it more scientifically correct to oversample after splitting the training and test set individually? You shouldn't over-sample the test set. The test set should be as objective as possible. By generating new test set data and evaluating your model on those, the procedure would lose its objectivity. Do we have to balance the test set? No, you shouldn't under any condition balance the test set. Could ENN and/or SMOTE introduce bias for specific classifiers? I don't think that k-NN or any other specific classifier would be more biased to the test set than the others. I'm not sure about this, though.
Does class balancing introduce bias? For the main question: Does class balancing introduce bias? Yes, in most cases it does. Since the new data points are generated from the old ones, they can't introduce much variance to the dataset.
41,446
What do neural networks offer that traditional non-linear statistical models do not offer?
The ability to embed structural and algorithmic priors into the model. The simplest example of this is convolutional neural networks applied to image data. The structural prior is that nearby regions of the image are more closely related / relevant to each other compared to far-away regions. Graph convolutional networks extend this "locality" prior to arbitrary graph/network structures. 1D and 3D convolutional networks extend this prior to sound / 1D signal data and 3D scans respectively. Powerful quadratic programming solvers have been developed. It is possible to literally embed such a QP solver as part of a neural network, inducing an algorithmic prior which says "find solutions which make use of QP". Value Iteration Networks force a prior which says "make use of this well-known RL algorithm to solve this RL problem". Computer vision scientists can build 3D geometry into a neural network, enforcing the prior "we live in 3D Euclidean space, and here is our camera model" into the architecture of the network.
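The "locality" prior can be made concrete with a bare-bones valid-mode 2D convolution (really a cross-correlation, as in deep-learning libraries); this is an illustrative pure-Python sketch, not any framework's API. Each output value depends only on a small neighbourhood of the input, which is exactly the structural assumption described above.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a 2D list with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    # each output pixel mixes only a kh x kw neighbourhood of the input:
    # the locality prior, baked into the computation itself
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

A fully connected layer, by contrast, would let every output depend on every input pixel, discarding this prior and paying for it in parameters and data.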
What do neural networks offer that traditional non-linear statistical models do not offer?
The ability to embed structural and algorithmic priors into the model. The simplest example of this is convolutional neural networks applied to image data. The structural prior is that nearby regions
What do neural networks offer that traditional non-linear statistical models do not offer? The ability to embed structural and algorithmic priors into the model. The simplest example of this is convolutional neural networks applied to image data. The structural prior is that nearby regions of the image are more closely related / relevant to each other compared to far-away regions. Graph convolutional networks extend this "locality" prior to arbitrary graph/network structures. 1D and 3D convolutional networks extend this prior to sound / 1D signal data and 3D scans respectively. Powerful quadratic programming solvers have been developed. It is possible to literally embed such a QP solver as part of a neural network, inducing an algorithmic prior which says "find solutions which make use of QP". Value Iteration Networks force a prior which says "make use of this well-known RL algorithm to solve this RL problem". Computer vision scientists can build 3D geometry into a neural network, enforcing the prior "we live in 3D Euclidean space, and here is our camera model" into the architecture of the network.
What do neural networks offer that traditional non-linear statistical models do not offer? The ability to embed structural and algorithmic priors into the model. The simplest example of this is convolutional neural networks applied to image data. The structural prior is that nearby regions
41,447
What do neural networks offer that traditional non-linear statistical models do not offer?
It's my understanding that at this point in time, there is no real solid mathematical reason for why NN have seen as much success as they have. Perhaps that's why you can't find anything convincing at this time, although there are plenty of heuristic arguments. The one proof that is brought up a lot (for good reason) is the "Universal Approximation Theorem"; that is, with enough neurons, any smooth function can be approximated arbitrarily well by a large enough neural network. This suggests that given enough parameters in our NN and enough data, we should be able to get arbitrarily close to the true function we are attempting to approximate. However, the Universal Approximation Theorem alone doesn't explain the success of NN's, as NN are definitely not the only type of machine learning/statistical model that have this type of property! For a very simple alternative, you could take a linear model and simply expand the covariates to include non-linear terms and interaction effects. This can also approximate any function given enough expansions. Now in the case of linear models, although the Universal Approximation Theorem is true, we can do the math right away to see this becomes much too data hungry to ever be of practical use. For example, suppose we have a model with $k$ covariates. A simple linear model with no parameter expansions requires fitting $k$ coefficients. If we want to include just the first order interaction effects, we are now up to $k^2$ coefficients. While this is a richer set of models than simple linear effects, it's still not that complicated. If we want to include third order effects, this requires $k^3$ coefficients. Note that we haven't even addressed adding non-linear parameter expansions yet. If $k$ is at all large, it becomes obvious that this is not going to work out well for approximating functions in which the covariates have complex interactions.
So to me, the real question is which kinds of models can approximate complex relations from a finite set of data well. I think the paragraph above is fairly convincing that linear models with simple parameter expansions are not the way to go. It's my understanding that the argument for NN's is that (a) there's no convincing argument that they won't work and (b) empirically, they seem to be working quite well in a wide body of problems when one has lots of data and complex interactions of features.
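The arithmetic behind this "data hungry" argument is easy to write down. The $k^2$ and $k^3$ figures in the text are order-of-magnitude; the exact count of coefficients once all interactions up to a given order are included is the sum of binomial coefficients $\binom{k}{1} + \binom{k}{2} + \dots$, sketched here for illustration:

```python
from math import comb

def n_coefficients(k, order):
    """Coefficients needed for all main effects and interactions up to `order`
    (excluding the intercept): C(k,1) + C(k,2) + ... + C(k,order)."""
    return sum(comb(k, d) for d in range(1, order + 1))
```

With k = 20 covariates this gives 20 coefficients for main effects only, 210 once pairwise interactions are added, and 1350 with three-way interactions, before any non-linear basis expansion of each covariate has even been considered.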
What do neural networks offer that traditional non-linear statistical models do not offer?
It's my understanding that at this point in time, there is no real solid mathematical reason for why NN have seen as much success as they have. Perhaps that's why you can't find any thing convincing a
What do neural networks offer that traditional non-linear statistical models do not offer? It's my understanding that at this point in time, there is no real solid mathematical reason for why NN have seen as much success as they have. Perhaps that's why you can't find anything convincing at this time, although there are plenty of heuristic arguments. The one proof that is brought up a lot (for good reason) is the "Universal Approximation Theorem"; that is, with enough neurons, any smooth function can be approximated arbitrarily well by a large enough neural network. This suggests that given enough parameters in our NN and enough data, we should be able to get arbitrarily close to the true function we are attempting to approximate. However, the Universal Approximation Theorem alone doesn't explain the success of NN's, as NN are definitely not the only type of machine learning/statistical model that have this type of property! For a very simple alternative, you could take a linear model and simply expand the covariates to include non-linear terms and interaction effects. This can also approximate any function given enough expansions. Now in the case of linear models, although the Universal Approximation Theorem is true, we can do the math right away to see this becomes much too data hungry to ever be of practical use. For example, suppose we have a model with $k$ covariates. A simple linear model with no parameter expansions requires fitting $k$ coefficients. If we want to include just the first order interaction effects, we are now up to $k^2$ coefficients. While this is a richer set of models than simple linear effects, it's still not that complicated. If we want to include third order effects, this requires $k^3$ coefficients. Note that we haven't even addressed adding non-linear parameter expansions yet.
If $k$ is at all large, it becomes obvious that this is not going to work out well for approximating functions in which the covariates have complex interactions. So to me, the real question is which kinds of models can approximate complex relations from a finite set of data well. I think the paragraph above is fairly convincing that linear models with simple parameter expansions are not the way to go. It's my understanding that the argument for NN's is that (a) there's no convincing argument that they won't work and (b) empirically, they seem to be working quite well in a wide body of problems when one has lots of data and complex interactions of features.
What do neural networks offer that traditional non-linear statistical models do not offer? It's my understanding that at this point in time, there is no real solid mathematical reason for why NN have seen as much success as they have. Perhaps that's why you can't find any thing convincing a
41,448
Why start with measures of central tendency?
One reason we teach measures of central tendency before measures of spread is that many measures of spread involve measures of central tendency: the standard deviation involves the mean, and the median absolute deviation involves the median. We could teach the range without teaching the mean, but teaching the range is not exactly a long-term project. Indeed, the mean is used nearly everywhere in statistics. Among measures of central tendency, I think we teach the arithmetic mean first because it is familiar - "average" occurs all over the place and it usually means "arithmetic mean". Of course, there are lots of measures of central tendency that we often do not teach so early in the curriculum - e.g. the trimmed, winsorized, geometric and harmonic means.
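The dependency described above can be made concrete in a few lines of pure Python (an illustrative sketch; real analyses would use a statistics library): each spread measure is defined around a centre measure.

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def std_dev(xs):
    """Population standard deviation: spread around the MEAN."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def median_abs_deviation(xs):
    """Spread around the MEDIAN."""
    m = median(xs)
    return median([abs(x - m) for x in xs])
```

You literally cannot evaluate `std_dev` or `median_abs_deviation` without first computing the corresponding centre, which is the pedagogical point.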
Why start with measures of central tendency?
One reason we teach measures of central tendency before measures of spread because many measures of spread involve measures of central tendency: The standard deviation involves the mean, median absolu
Why start with measures of central tendency? One reason we teach measures of central tendency before measures of spread is that many measures of spread involve measures of central tendency: the standard deviation involves the mean, and the median absolute deviation involves the median. We could teach the range without teaching the mean, but teaching the range is not exactly a long-term project. Indeed, the mean is used nearly everywhere in statistics. Among measures of central tendency, I think we teach the arithmetic mean first because it is familiar - "average" occurs all over the place and it usually means "arithmetic mean". Of course, there are lots of measures of central tendency that we often do not teach so early in the curriculum - e.g. the trimmed, winsorized, geometric and harmonic means.
Why start with measures of central tendency? One reason we teach measures of central tendency before measures of spread because many measures of spread involve measures of central tendency: The standard deviation involves the mean, median absolu
41,449
Is Reinforcement Learning the right choice for painting like Bob Ross?
I would suggest genetic algorithms (GA) or other global optimisers for this search, as your sequential score as you "build" the painting into more complex states is probably not the best guide. There are a few examples of similar puzzles, such as building Mona Lisa out of circles, and here is a more recent example of the same problem, with code examples. A GA approach would basically consist of a population of 100s of randomly generated sets of strokes, which you score and assess the best options. Then you select from the population, favouring solutions with the best score (there are lots of options for that, such as only picking from the top fraction, to using a skewed distribution that favours the top). Create pairs of solutions and "breed" them by taking some parts from the first and some from the second parent. Add just a little random noise as a "mutation". When you have done that enough to create a second generation, repeat the whole process. There are lots of variations. RL should also work, but you may have an uphill task to create a policy or value function that can learn the mapping from stroke actions and the current state to the eventual policy or value. It's definitely feasible from a theoretical standpoint though. The state is the current image. The action is a choice of next stroke. The reward is the improvement in score, and should probably be assessed on each action (but could be done every 10, every 50, or even just at the end - longer delays will challenge the RL more, but might allow faster iteration). Most RL algorithms, such as Q-learning, should be able to cope with avoiding "dead end" results where early good scores are false leads, and need to be revised. I don't know, but would be very interested to see, whether a GA or RL solves this problem more efficiently . . . my gut feeling is GA would be the way to go.
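A toy version of the GA loop described above, shrunk from painting strokes to matching a target list of numbers; all names and settings here are illustrative, not from any particular library. It shows the four ingredients in order: a scored population, selection of the fittest fraction, one-point crossover, and a small Gaussian mutation, with the best individual carried over unchanged (elitism).

```python
import random

def evolve(target, pop_size=60, generations=300, seed=1):
    """Minimal genetic algorithm: evolve a vector toward `target`."""
    rng = random.Random(seed)
    n = len(target)

    def score(ind):                         # fitness = negative squared error
        return -sum((a - b) ** 2 for a, b in zip(ind, target))

    # population of random candidate "stroke parameter" vectors
    pop = [[rng.uniform(0, 10) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[:pop_size // 4]       # select from the top quarter
        children = [parents[0][:]]          # elitism: keep the best as-is
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)   # "breed" two parents
            cut = rng.randrange(1, n)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:          # occasional small mutation
                child[rng.randrange(n)] += rng.gauss(0, 0.5)
            children.append(child)
        pop = children
    return max(pop, key=score)
```

For the painting task, an individual would instead be a list of stroke parameters and the score a pixel-wise comparison against the target image, but the loop is the same.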
Is Reinforcement Learning the right choice for painting like Bob Ross?
I would suggest genetic algorithms (GA) or other global optimisers for this search, as your sequential score as you "build" the painting into more complex states is probably not the best guide. There
Is Reinforcement Learning the right choice for painting like Bob Ross? I would suggest genetic algorithms (GA) or other global optimisers for this search, as your sequential score as you "build" the painting into more complex states is probably not the best guide. There are a few examples of similar puzzles, such as building Mona Lisa out of circles, and here is a more recent example of the same problem, with code examples. A GA approach would basically consist of a population of 100s of randomly generated sets of strokes, which you score and assess the best options. Then you select from the population, favouring solutions with the best score (there are lots of options for that, such as only picking from the top fraction, to using a skewed distribution that favours the top). Create pairs of solutions and "breed" them by taking some parts from the first and some from the second parent. Add just a little random noise as a "mutation". When you have done that enough to create a second generation, repeat the whole process. There are lots of variations. RL should also work, but you may have an uphill task to create a policy or value function that can learn the mapping from stroke actions and the current state to the eventual policy or value. It's definitely feasible from a theoretical standpoint though. The state is the current image. The action is a choice of next stroke. The reward is the improvement in score, and should probably be assessed on each action (but could be done every 10, every 50, or even just at the end - longer delays will challenge the RL more, but might allow faster iteration). Most RL algorithms, such as Q-learning, should be able to cope with avoiding "dead end" results where early good scores are false leads, and need to be revised. I don't know, but would be very interested to see, whether a GA or RL solves this problem more efficiently . . . my gut feeling is GA would be the way to go.
Is Reinforcement Learning the right choice for painting like Bob Ross? I would suggest genetic algorithms (GA) or other global optimisers for this search, as your sequential score as you "build" the painting into more complex states is probably not the best guide. There
41,450
Is Reinforcement Learning the right choice for painting like Bob Ross?
I think your skepticism of RL for this task is well-founded. But there has been some research toward building neural networks to reproduce the style of painters. This work leverages the power of convolutional neural networks. "A Neural Algorithm of Artistic Style" Leon A. Gatys, Alexander S. Ecker, Matthias Bethge In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities. However, in other key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by a class of biologically inspired vision models called Deep Neural Networks [1, 2]. Here we introduce an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality. The system uses neural representations to separate and recombine content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. Moreover, in light of the striking similarities between performance-optimised artificial neural networks and biological vision [3–7], our work offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
41,451
Difference between Cholesky decomposition and log-cholesky Decomposition
I think it's less confusing to call it the log-Cholesky paramet(e)rization rather than the log-Cholesky decomposition (i.e., the "decomposition" part doesn't change ...) From Pinheiro's thesis (1994, UW Madison) - I think it has the same information as the paper you cite: 6.1.2 Log-Cholesky Parametrization If one requires the diagonal elements of $\boldsymbol L$ in the Cholesky factorization to be positive then $\boldsymbol L$ is unique. In order to avoid constrained estimation, one can use the logarithms of the diagonal elements of $\boldsymbol L$. We call this parametrization the log-Cholesky parametrization. It inherits the good computational properties of the Cholesky parametrization, but has the advantage of being uniquely defined. In other words, in your notation it would be: $$\begin{bmatrix} \log(l_{11}) & 0 & 0 \\ l_{21} & \log(l_{22}) & 0\\ l_{31} & l_{32} & \log(l_{33})\end{bmatrix}$$ For what it's worth, when defining a parameter vector for a model you also need to define an order in which the matrix is unpacked; for example, in lme4 the log-Cholesky lower triangle is unpacked in column-first order, i.e. $\theta_1 = \log(l_{11})$, $\theta_2=l_{21}$, $\theta_3=l_{31}$, $\theta_4=\log(l_{22})$, ...
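A small NumPy sketch of the round trip (the helper names are mine, not from any package; the column-first unpacking follows the lme4 convention mentioned above):

```python
import numpy as np

def cov_to_theta(S):
    """Pack an SPD matrix S into a log-Cholesky parameter vector,
    unpacked column-first as in lme4: log(l11), l21, l31, log(l22), ..."""
    L = np.linalg.cholesky(S)                  # lower triangular, diag > 0
    M = L.copy()
    np.fill_diagonal(M, np.log(np.diag(L)))    # log-transform the diagonal
    return M.T[np.triu_indices_from(M)]        # column-first unpacking

def theta_to_cov(theta, d):
    """Invert cov_to_theta: rebuild L (exp the diagonal), return L L^T."""
    M = np.zeros((d, d))
    M.T[np.triu_indices(d)] = theta            # fills lower triangle column-first
    np.fill_diagonal(M, np.exp(np.diag(M)))
    return M @ M.T

S = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
theta = cov_to_theta(S)
```

Since the diagonal entries are exponentiated on the way back, any real-valued theta vector maps to a valid (positive-definite) covariance matrix, which is exactly the unconstrained-estimation advantage described in the quote.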
41,452
How to account for multiple measurements of same person in either two-group comparision or regression?
When analyzing non-independent observations (e.g. two eyes of same person) in regression, is mixed effect model the way to go? In short: Yes. Mixed models are capable of modelling the dependence or structure introduced in the data by the study design. In your example of measuring both eyes, you can use a mixed model with a random effect for individual, since individuals have two eyes and thus cause the dependence by being in the data twice. However, you still cannot consider pseudoreplications to be true replicates in a mixed model. In many cases you can make more effective use of them in a mixed model, but the number of true replicates hasn't magically increased by changing the type of model. That being said, the repeated measures you are describing are very common in medical research and can be modelled just fine with a mixed model. Mixed effect models are all regression based. How would I go about doing the equivalent of t-test or mann whitney u test while accounting for non-independence issue? You can easily perform the equivalent of a $t$-test using a (mixed) regression model:

library(lme4)
lmer(y ~ x + (1 | rand))

where x is a two-level factor. The first group of x will be the intercept, and significance of x as an explanatory variable means there is a significant difference between the two groups. As for the Mann-Whitney U test, I'm not sure you could do a test based on ranks with a mixed model. However, you probably don't need to, since you can either use a generalized linear mixed model (e.g. glmer(..., family = 'poisson')) or a non-linear mixed model (see the nlme package). Although the nlme package is great, I would recommend you not to jump to non-linear models too fast, because a GLMM is often easier to interpret and in many cases there is a logical choice for the theoretical distribution of the data-generating process in clinical research.

Alternatively, you could look into Bayesian hierarchical modelling, which is actually quite similar to mixed models, albeit a bit more difficult if you are not familiar with Bayesian statistics. There are numerous models that try to model dependence or hierarchy. I am not familiar with "Cluster-correlated robust estimates of variance", but a mixed model with nested structure is essentially a hierarchical model.
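A tiny simulation of why two eyes are not two independent replicates (illustrative only; pure NumPy rather than an actual mixed-model fit, and all the numbers are made up): a shared per-person effect makes the two eyes correlated, and averaging within person recovers one value per true replicate, which is what the random intercept for "individual" accounts for in lmer().

```python
import numpy as np

rng = np.random.default_rng(42)
n_people = 200

# Each person has a random "individual" effect shared by both eyes,
# plus independent eye-level noise -> the two eyes are correlated.
person = rng.normal(0.0, 1.0, n_people)
left = person + rng.normal(0.0, 0.5, n_people)
right = person + rng.normal(0.0, 0.5, n_people)

within_person_corr = np.corrcoef(left, right)[0, 1]

# Pseudoreplication check: 400 eye measurements, but only 200
# independent units. One value per person = one true replicate.
per_person_mean = (left + right) / 2
```

With these settings the true between-eye correlation is $1/(1+0.25) = 0.8$, so treating the 400 eyes as 400 independent observations would badly overstate the effective sample size.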
41,453
Back-propagation in Convolution layer
I try to explain the dimensions obtained (5x18x18 -> 3x20x20):

5 -> 3: the flipped convolutions are repeated 3 times, but the effects of each of the 5 filters are summed up, exactly as you do in the forward phase. In any case, a convolutional layer can take any input depth and produce any number of output filters.

18 -> 20: this is given by the full convolution, in which padding is applied to the input image, yielding a larger image as the result.

Anyway, here the backpropagation in convolution layers is very well explained.
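A quick way to check the 18 -> 20 shape arithmetic is to run a "full" 2-D convolution by hand (a NumPy sketch, not tied to any framework; the helper name is mine):

```python
import numpy as np

def conv2d_full(a, k):
    """'Full' 2-D convolution: pad so every partial overlap counts,
    and flip the kernel (which is what distinguishes convolution
    from cross-correlation)."""
    ph, pw = k.shape[0] - 1, k.shape[1] - 1
    ap = np.pad(a, ((ph, ph), (pw, pw)))       # zero padding on all sides
    kf = k[::-1, ::-1]                         # flipped kernel
    oh, ow = a.shape[0] + ph, a.shape[1] + pw  # 18 + 3 - 1 = 20
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(ap[i:i + k.shape[0], j:j + k.shape[1]] * kf)
    return out

grad = np.ones((18, 18))    # upstream 18x18 gradient map
kernel = np.ones((3, 3))    # one 3x3 filter
full = conv2d_full(grad, kernel)
```

The corners of the full output only see one overlapping cell, while interior cells see the whole 3x3 window, which is why the output grows from 18x18 to 20x20.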
41,454
Is the negative exponential distribution a member of the exponential family?
This is correct, if $\mu$ is a parameter of the distribution rather than a given, the indicator function implies this distribution is not an exponential family.
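To make the role of the indicator concrete, a sketch (assuming the usual shifted-exponential form of the density, which is what "negative exponential" typically denotes here):

```latex
f(x;\mu) \;=\; e^{-(x-\mu)}\,\mathbf{1}\{x \ge \mu\}
```

An exponential family requires $f(x;\mu)=h(x)\,c(\mu)\exp\{\sum_j w_j(\mu)\,t_j(x)\}$, which forces the support $\{x : f(x;\mu) > 0\}$ not to depend on $\mu$. Here the support is $[\mu,\infty)$, and the indicator $\mathbf{1}\{x\ge\mu\}$ couples $x$ and $\mu$, so it cannot be split into an $h(x)$ factor and a $c(\mu)$ factor.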
41,455
How to embed in Euclidean space
These are known as multidimensional scaling algorithms. From wikipedia (https://en.wikipedia.org/wiki/Multidimensional_scaling), "An MDS algorithm aims to place each object in N-dimensional space such that the between-object distances are preserved as well as possible." So essentially you input a distance matrix and the algorithms output a Euclidean representation that should approximate the distances. In your case, you have similarity scores, so you'll need to take either the reciprocal (distance = 1 / similarity) or subtract similarity from a large constant (distance = c - similarity).
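As a sketch of the idea, classical (Torgerson) MDS fits in a few lines of NumPy (the function name is mine; for real use, sklearn.manifold.MDS accepts a precomputed dissimilarity matrix). When the distances are exactly Euclidean in k dimensions, the embedding reproduces them up to rotation/reflection:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS from a distance matrix D.

    Double-centers the squared distances into a Gram matrix and takes
    the top-k eigenvectors; rows of the result are Euclidean
    coordinates whose pairwise distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]           # largest eigenvalues first
    vals, vecs = np.clip(vals[idx], 0, None), vecs[:, idx]
    return vecs * np.sqrt(vals)

# Distances among 4 points that genuinely lie on a unit square:
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D, k=2)
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```

With similarity scores as input, you would first convert to distances (e.g. `D = c - S`) as described above, after which the same procedure applies.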
41,456
How to embed in Euclidean space
vectors that are similar under the original measure have small Euclidean distance under the embedding This is the goal of dimensionality reduction, especially the nonlinear dimensionality reduction, where it is the only goal (because they cannot in general enforce that distances between points that are far apart will be undistorted, if you're interested in proof you can find it here). Some approaches: Multidimensional scaling Isomap (this is nonlinear method that uses MDS for distances retrieved from kNN graph) Kernel PCA (uses kernel trick to do PCA in embedding space) graph-based dimensionality reduction (your distance matrix defines a graph, and graphs give rise to useful matrices, check out A tutorial on Spectral Clustering. In Python megaman implements Spectral Embedding) tSNE (reduces dimensionality trying to preserve distribution of distances) If you're interested in Python in particular, then almost all of these methods are implemented in scikit-learn, especially in manifold module.
41,457
MLE of $\sqrt{\frac{2}{\pi}}\exp\left({-\frac{1}{2}(x-\theta)}^2\right)$
For $\theta\le x_{(1)}$ the likelihood is an increasing function of $\theta$ and for $\theta>x_{(1)}$, the likelihood is zero. Hence, the MLE of $\theta$ is $\hat\theta=X_{(1)}$.
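A sketch of the monotonicity argument (the density is a half-normal shifted to start at $\theta$, so the likelihood carries an indicator on the smallest observation):

```latex
L(\theta) \;=\; \left(\tfrac{2}{\pi}\right)^{n/2}
  \exp\!\Big(-\tfrac{1}{2}\sum_{i=1}^n (x_i-\theta)^2\Big)\,
  \mathbf{1}\{\theta \le x_{(1)}\},
\qquad
\frac{d}{d\theta}\log L(\theta) \;=\; \sum_{i=1}^n (x_i-\theta) \;\ge\; 0
  \quad \text{for } \theta \le x_{(1)}.
```

So the log-likelihood is non-decreasing on $(-\infty, x_{(1)}]$ and is zero (log-likelihood $-\infty$) beyond it, giving $\hat\theta = X_{(1)}$.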
41,458
Deep Learning : Using dropout in Autoencoders?
Should i normalize my numerical data values before feeding to any type of autoencoder? If they are int and float values do I still have to normalize? Normalizing data often improves the model because it amounts to pre-conditioning the inputs so that optimization proceeds more smoothly. Which activation function should I use in autoencoder? Some article and research paper says "sigmoid" and some says "relu"? Use the one that works best for your problem. ReLUs are gaining popularity because they alleviate some problems with sigmoids units. See What are the advantages of ReLU over sigmoid function in deep neural networks? for some more information. Should I use dropout in each layer? That depends on what you want your model to do and what qualities you want it to have. Autoencoders that include dropout are often called "denoising autoencoders" because they use dropout to randomly corrupt the input, with the goal of producing a network that is more robust to noise. This tutorial has more information.
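As a sketch of the input-corruption step behind denoising autoencoders (pure NumPy, not tied to any framework; the function name is made up): a fraction of the inputs is randomly zeroed, and the network is then trained to reconstruct the original, uncorrupted input.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_corrupt(x, rate=0.3):
    """Randomly zero a fraction `rate` of the inputs.

    This is the corruption applied to the *input* of a denoising
    autoencoder; the reconstruction target remains the clean x."""
    mask = rng.random(x.shape) >= rate
    return x * mask

x = np.ones((1000,))
x_noisy = dropout_corrupt(x, rate=0.3)
```

Training the autoencoder on `(x_noisy, x)` pairs, rather than `(x, x)`, is what pushes it to learn representations that are robust to missing or noisy features.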
41,459
What is the reason for not including an intercept term in AR and ARMA models?
ARMA models can easily be formulated with an intercept term, and these are in common use. For example an AR($1$) model with an intercept term can be written as: $$\begin{equation} \begin{aligned} X_t &= \mu + \phi (X_{t-1} - \mu) + \varepsilon_t \\[6pt] &= (1-\phi) \mu + \phi X_{t-1} + \varepsilon_t. \end{aligned} \end{equation}$$ In the special case where $\mu = 0$ this simplifies to the formula you have probably seen: $$X_t = \phi X_{t-1} + \varepsilon_t.$$ For pedagogical purposes, it is common for books and lecture notes on time-series models to omit the intercept term because it does not really add anything of substance to understanding the model form (it is just a shift in location). It is relatively simple to add the intercept term into the model by defining a zero-mean model (no intercept term) for $\{ \tilde{X}_t \}$ and defining $\{ X_t \}$ with $X_t = \mu + \tilde{X}$.
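A quick numerical check of the reparametrization (a sketch; the parameter values and simulation length are arbitrary): simulating $X_t = (1-\phi)\mu + \phi X_{t-1} + \varepsilon_t$ should give a sample mean close to $\mu$ and a lag-1 autocorrelation close to $\phi$.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, phi, n = 15.0, 0.6, 200_000

# AR(1) with intercept (1 - phi) * mu, so the stationary mean is mu.
x = np.empty(n)
x[0] = mu                      # start at the stationary mean
eps = rng.normal(0.0, 1.0, n)
for t in range(1, n):
    x[t] = (1 - phi) * mu + phi * x[t - 1] + eps[t]
```

Setting $\mu = 0$ recovers the intercept-free textbook form; the intercept only shifts the location of the process, which is why lecture notes usually omit it.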
41,460
What is the reason for not including an intercept term in AR and ARMA models?
To begin with, in arima models the constant is mandatory if d=0, i.e. no differencing is in play. If d≠0 then the constant is optional. If d≠0 and a constant is in the model, there is a steady-state constant reflecting a "slope", i.e. growth, as compared to deterministic growth via time/counting-number predictor variables in an armaX model, e.g. X=1,2,3,...,t. See Constant in arima model whether to include or exclude? for a related discussion. The reason a constant should always be included in a regression model is that if it is omitted then the prediction equation is forced to go through the point 0,0, i.e. the origin.
41,461
How are models combined in locally weighted linear regression?
What you are doing sounds like filtering or smoothing data. It resembles something like Savitzky–Golay filtering, kernel smoothing, or LOcally WEighted/Estimated Scatterplot Smoothing (LOESS/LOWESS), for which there are many great resources. If you do something like such smoothing or filtering, then you run the algorithm step (fitting a regression line) once for each point (in your case 450 times). Every prediction/estimate/filtered value $\hat{y}^{(i)}$ that you make requires a separate regression. Thus 450 regressions to predict 450 values $\hat{y}^{(i)}$.

Example using question 5 from your lecture notes: In the image below you have 450 measurements (gray points) with 450 different locally weighted (using $\tau=5$) linear curves (other orders could be used if you like) and points fitted to them (blue/red). For clarity only 11 of the fitted curves and points (the ones in red) are made salient. Note that each predicted point is associated with a single regression curve. So the "model combining in locally weighted linear regression" is done by using a single model for each single point. Note: effectively this is a sort of linear smoothing, like the trapezium rule or Simpson's rule (which can still be derived easily), only it is more flexible with unevenly spaced $x$ values, and the kernel may adapt from place to place.

Combining the regressions: The combination of regressions is eventually done in a different way (in part ii of the question in your lecture notes). The 200 smoothed training samples are used to make predictions of the left side of the spectrum based on the right side of the spectrum (this is useful when you train the model on a selection of quasars for which you observe both sides very well and then wish to apply the trained and tested model to more difficult quasars which have the left side obstructed).

So the "combination" is done by predicting the left side of the spectrum as a weighted function of the 200 training samples, using weights defined by a distance between samples on the right side of the spectrum. What is not explicitly done in this example is 'training' of some fitting parameters (I am not sure what the fitting parameter is, maybe $h$, but it is not a linear regression that is performed). But anyway, this is how the 'models' are 'combined' in this example.
41,462
How are models combined in locally weighted linear regression?
Martijn Weterings posts a valid answer to this question, but I am going to add a bit more to this. With respect to the pseudo-algorithm written in the question, it isn't quite complete. It should be like this:

models = []
for instance in X:
    Set current instance as the query point
    Compute weights for all instances using the equation above
    Compute optimal parameters using the equation for theta above
    Compute y_pred by taking the dot product of the optimal thetas with the instance
    Append y_pred to models

The values of $\hat{y}$ inside the models list can then be plotted against $X$ to see the smoothing. I have done this on the first instance of the training data set that can be found on question 5 at this link: http://cs229.stanford.edu/ps/ps1/ps1.pdf This is the result of a simple linear regression: After the smoothing using weighted linear regression described above, the output is:
41,463
How are models combined in locally weighted linear regression?
As I am also working on the same problem as you (CS229!) I have been looking into implementing locally weighted regression in Python. The following implementation is heavily based on the version provided by Alexandre Gramfort, an Sklearn developer, on his github page, but uses a bell shaped kernel as you have described in your question.

import numpy as np
from scipy import linalg

def lowess_bell_shape_kern(x, y, tau=.005):
    """lowess_bell_shape_kern(x, y, tau=.005) -> yest
    Locally weighted regression: fits a nonparametric regression curve to a
    scatterplot. The arrays x and y contain an equal number of elements; each
    pair (x[i], y[i]) defines a data point in the scatterplot. The function
    returns the estimated (smooth) values of y. The kernel function is the
    bell shaped function with parameter tau. Larger tau will result in a
    smoother curve.
    """
    m = len(x)
    yest = np.zeros(m)
    # Initializing all weights from the bell shape kernel function
    w = np.array([np.exp(-(x - x[i])**2 / (2 * tau)) for i in range(m)])
    # Looping through all x-points
    for i in range(m):
        weights = w[:, i]
        b = np.array([np.sum(weights * y), np.sum(weights * y * x)])
        A = np.array([[np.sum(weights), np.sum(weights * x)],
                      [np.sum(weights * x), np.sum(weights * x * x)]])
        theta = linalg.solve(A, b)
        yest[i] = theta[0] + theta[1] * x[i]
    return yest

The trick is to express the problem in matrix form and to solve the system of equations using linalg.solve, for each point in the data set. Vectorized implementation For those who might be interested, here is my attempt at describing the vectorized implementation mathematically. Consider the 1D case where $\Theta = [\theta_0, \theta_1]$ and $x$ and $y$ are vectors of size $m$. 
The cost function $J(\theta)$ is a weighted version of the OLS regression, where the weights $w$ are defined by some kernel function \begin{aligned} J(\theta) &= \sum_{i=1}^m w^{(i)} \left( y^{(i)} - (\theta_0 + \theta_1 x^{(i)}) \right)^2 \\ \frac{\partial J}{\partial \theta_0} &= -2 \sum_{i=1}^m w^{(i)} \left( y^{(i)} - (\theta_0 + \theta_1 x^{(i)}) \right) \\ \frac{\partial J}{\partial \theta_1} &= -2 \sum_{i=1}^m w^{(i)} \left( y^{(i)} - (\theta_0 + \theta_1 x^{(i)}) \right) x^{(i)} \end{aligned} Cancelling the $-2$ terms, equating to zero, expanding and re-arranging the terms: \begin{aligned} & \frac{\partial J}{\partial \theta_0} = \sum_{i=1}^m w^{(i)} \left( y^{(i)} - (\theta_0 + \theta_1 x^{(i)}) \right) = 0 \\ & \sum_{i=1}^m w^{(i)} \theta_0 + \sum_{i=1}^m w^{(i)} \theta_1 x^{(i)} = \sum_{i=1}^m w^{(i)} y^{(i)} &\text{Eq. (1)} \\ \\ & \frac{\partial J}{\partial \theta_1} = \sum_{i=1}^m w^{(i)} \left( y^{(i)} - (\theta_0 + \theta_1 x^{(i)}) \right) x^{(i)} = 0 \\ & \sum_{i=1}^m w^{(i)} x^{(i)} \theta_0 + \sum_{i=1}^m w^{(i)} \theta_1 x^{(i)} x^{(i)} = \sum_{i=1}^m w^{(i)} y^{(i)} x^{(i)} &\text{Eq. (2)} \end{aligned} Writing Eq. (1) and Eq. (2) in matrix form $\mathbf{A \Theta = b}$ allows us to solve for $\Theta$: \begin{aligned} & \sum_{i=1}^m w^{(i)} \theta_0 + \sum_{i=1}^m w^{(i)} \theta_1 x^{(i)} = \sum_{i=1}^m w^{(i)} y^{(i)} \\ & \sum_{i=1}^m w^{(i)} x^{(i)} \theta_0 + \sum_{i=1}^m w^{(i)} \theta_1 x^{(i)} x^{(i)} = \sum_{i=1}^m w^{(i)} y^{(i)} x^{(i)} \\ & \begin{bmatrix} \sum w^{(i)} & \sum w^{(i)} x^{(i)} \\ \sum w^{(i)} x^{(i)} & \sum w^{(i)} x^{(i)} x^{(i)} \end{bmatrix} \begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix} = \begin{bmatrix} \sum w^{(i)} y^{(i)} \\ \sum w^{(i)} y^{(i)} x^{(i)} \end{bmatrix} \\ & \mathbf{A} \Theta = \mathbf{b} \\ & \Theta = \mathbf{A}^{-1} \mathbf{b} \end{aligned}
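As a sanity check of the derivation, the 2×2 system $A\Theta = b$ can be assembled and solved numerically, then compared against a generic weighted fit (assuming a Gaussian kernel at an arbitrary query point; note that numpy's polyfit applies its weights to the unsquared residuals, hence the square root):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=30)
w = np.exp(-(x - 0.5) ** 2 / (2 * 0.2 ** 2))       # kernel weights for query point x0 = 0.5

# Assemble A and b exactly as in the derivation above
A = np.array([[np.sum(w),     np.sum(w * x)],
              [np.sum(w * x), np.sum(w * x * x)]])
b = np.array([np.sum(w * y), np.sum(w * y * x)])
theta = np.linalg.solve(A, b)                       # theta = [theta_0, theta_1]

# numpy's weighted polynomial fit minimizes sum (w_i r_i)^2, so pass sqrt(w)
check = np.polyfit(x, y, 1, w=np.sqrt(w))           # returns [slope, intercept]
```

The two solutions agree, which confirms the normal equations term by term.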
41,464
Are maximum likelihood estimator robust estimators?
By the definition of robust estimators, this is true. That is, M-estimators are a type of robust statistic, and MLEs are a special case of M-estimators. However, it's definitely not the case that MLEs in general have good robustness properties.
41,465
Are maximum likelihood estimator robust estimators?
One way to look at it is that there is no strict distinction between robust and non-robust estimators; rather, they can be compared according to their robustness properties (as already pointed out in the answer by @CliffAB). Thus, M-estimators can be viewed as explicitly designed to resemble maximum likelihood estimators (see, e.g., their introduction in Robust Statistics: Theory and Methods by Maronna, Martin and Yohai). That is, M-estimators are defined as minima of a function of the type shown in the OP (or as zeros of a derivative of such a function), which however does not necessarily have an interpretation as a likelihood (e.g., the associated probability density $f(x; \vec{\theta})$ might not be normalizable). Thus, both the sample mean and the sample median can be viewed as M-estimators for location, but the former with a breakdown point of 0 and the latter with a breakdown point of 50%. This suggests the other way to look at it: defining the non-robust estimators as those with extremely poor measures of robustness. Thus, we could define as non-robust any estimator with a breakdown point of 0, meaning that even a single outlying observation may produce an arbitrarily big deviation from the true value of the parameter being estimated.
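The breakdown-point contrast is easy to see numerically (a toy sketch: the sample mean, which is the Gaussian MLE for location, is dragged arbitrarily far by a single outlier, while the median stays put):

```python
import numpy as np

clean = np.arange(1.0, 21.0)        # 20 well-behaved observations; mean = median = 10.5
for outlier in (1e3, 1e6, 1e9):
    contaminated = np.append(clean[:-1], outlier)  # swap one observation for an outlier
    mean = contaminated.mean()        # Gaussian-MLE location estimate: breakdown point 0
    median = np.median(contaminated)  # breakdown point 50%: unmoved by one bad point
    print(f"outlier={outlier:.0e}  mean={mean:.1f}  median={median}")
```

As the outlier grows, the mean grows without bound while the median remains at 10.5, illustrating why a breakdown point of 0 is taken as the mark of a non-robust estimator.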
41,466
Propensity Score Matching implementation after multiple imputation
Update 1/4/20: A new package has been written for this purpose called MatchThem. It's on CRAN, and an article about how to use it is under review. It's compatible with version 4.0.0 of cobalt for balance checking. It has built-in functions for performing matching and estimating treatment effects from multiply imputed data. It has integration with svyglm() and svycoxph(), so you can estimate treatment effects for various forms of outcome. It really smooths out the process of estimating effects from multiply imputed data. I answered this question which provides R code for your case after using mice to multiply impute, MatchIt to match within each imputed data set, and glm() to estimate treatment effects in each imputed data set. See the documentation for cobalt for an example of the other method (averaging propensity scores across imputations).
41,467
Why do we care about Quasi-norm in Statistics and Machine Learning?
One common area where quasinorms are used involves dimension reduction and sparsity. Consider Lasso, where the standard OLS problem is augmented by a penalty, or cost term: $$\min_{\beta}\dfrac{1}{N}\|Y-X\beta\|_2^2 \quad \text{s.t.} \quad \|\beta\|_1\leq t$$ where $\|\cdot\|_p$ is the standard $L_p$ norm. Why are we doing this again? Well, the "energy" of the signal we're trying to study might be clustered in a small number of elements of $\beta = [\beta_0, \beta_1, ..., \beta_N]^T$, and by adding the above cost term, we penalize elements of $\beta$ that are "less important" to modelling the system. These "less important" ones get zeroed out, and we're left with a smaller-dimension system than where we started. With the advent of big data and problems of extremely high dimension, there's been a lot of research suggesting that the standard $\|\cdot\|_1$ reduction (Lasso) or the standard $\|\cdot\|_2$ reduction (Ridge) might not be enough. That is, we can get better results by using $L_p$ "norms" with $0<p<1$. This is where quasinorms come into play, since these no longer satisfy the triangle inequality property of $L_p$ norms with $p\geq 1$. Diving a bit deeper, compare two different vectors representing the hypothetical "true" value of $\beta$: $$\beta_1 = [1,1,1,1,1]$$ Notice that this vector is not sparse, i.e. we need all the elements of $\beta_1$. The $L_p$ norms are $$\|\beta_1\|_2 \approx 2.24$$ $$\|\beta_1\|_1 = 5$$ $$\|\beta_1\|_{1/2} = 25$$ Now, compare this to the following "sparse" vector $$\beta_2 = [2.25,0,0,0,0]$$ which gives us $$\|\beta_2\|_2 = 2.25$$ $$\|\beta_2\|_1 = 2.25$$ $$\|\beta_2\|_{1/2} = 2.25$$ Notice that $$\|\beta_1\|_2 \approx \|\beta_2\|_2$$ but that the differences between the norms really start to diverge as $p\rightarrow 0$. Using the initial Lasso example, if we sub in $L_{1/2}$ for $L_1$ in the cost term in the first equation, we see that non-sparse estimates of $\beta$ will be greatly penalized. 
Thus, the smaller the value of $p$, the more elements of $\beta$ will end up being zeroed out. This is an example using a small, five-dimensional object, but the results get more apparent as the dimension of the space you're working in increases. This is why it's relevant for big data and machine learning.
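The norm values quoted above are easy to reproduce (a small check; `lp` here is just $\|\beta\|_p = (\sum_i |\beta_i|^p)^{1/p}$, which is a quasinorm when $0 < p < 1$):

```python
import numpy as np

def lp(beta, p):
    """General l_p functional; a norm for p >= 1, a quasinorm for 0 < p < 1."""
    return np.sum(np.abs(beta) ** p) ** (1.0 / p)

dense = np.array([1.0, 1, 1, 1, 1])      # non-sparse beta_1
sparse = np.array([2.25, 0, 0, 0, 0])    # sparse beta_2

print(lp(dense, 2), lp(dense, 1), lp(dense, 0.5))     # ~2.236, 5.0, 25.0
print(lp(sparse, 2), lp(sparse, 1), lp(sparse, 0.5))  # 2.25 for every p
```

The dense vector's penalty explodes as p shrinks while the sparse vector's stays flat, which is exactly why small-p penalties push estimates toward sparsity.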
41,468
Why is F-measure more popular than accuracy?
$F_1$ works better when you care most about classification of rare positives. $F_\beta$ is just a weighted harmonic mean of two positive-focused measures, i.e. precision and recall. When $\beta=1$, precision and recall are equally weighted in $F_1$, which is used most in practice. When the positive incidence rate is low in the sample, predictive performance metrics such as accuracy can get overwhelmed by high rates of correct prediction in negatives, which can be accomplished by overpredicting them at the expense of positives. Suppose that you're classifying rare events, i.e. the positives rate in the sample is low. So, you make your model mark everything negative. What will the performance scores be? Let's look at the case where the positives are 10% of a sample of size 100. Your "model" always outputs negative. It's not really a model, of course, but let's see what happens: TP = 0, FP = 0, TN = 90 and FN = 10. $$ \begin{array}{|c|c|c|} \hline & \rm{True} & \rm{False} \\ \hline \rm{Positive} & TP = 0 & FP = 0 \\ \hline \rm{Negative} & FN = 10 & TN = 90 \\ \hline \end{array} $$ Obviously, this "model" missed all positives, but its accuracy is 90/100 = 90%! Luckily, F1 = 0/10 = 0%. Now compare this performance to a completely random marking of outputs with $(5+45)/100 =$ 50% marked positive: TP = 5, FP = 45, TN = 45 and FN = 5. $$ \begin{array}{|c|c|c|} \hline & \rm{True} & \rm{False} \\ \hline \rm{Positive} & TP = 5 & FP = 45 \\ \hline \rm{Negative} & FN = 5 & TN = 45 \\ \hline \end{array} $$ You get accuracy = 50/100 = 50%, and F1 = 10/60 ≈ 17%. Oh, what happened? The random marking is less accurate, despite marking half of the positives right according to the "accuracy" measure, but the F1 measure indicates that it's better than the "always negative" model. A dummy model that marks everything positive in this sample will produce TP = 10, FP = 90, TN = 0 and FN = 0. 
$$ \begin{array}{|c|c|c|} \hline & \rm{True} & \rm{False} \\ \hline \rm{Positive} & TP = 10 & FP = 90 \\ \hline \rm{Negative} & FN = 0 & TN = 0 \\ \hline \end{array} $$ So, its accuracy = 10/100 = 10%, while F1 = 20/110 ≈ 18%. All three models are not really models at all. They can be used as benchmarks when comparing to actual models. Here's another comparison, between two models. Suppose that you built a real model A and it produced the following metrics: TP = 9, FP = 5, TN = 85 and FN = 1. $$ \begin{array}{|c|c|c|} \hline & \rm{True} & \rm{False} \\ \hline \rm{Positive} & TP = 9 & FP = 5 \\ \hline \rm{Negative} & FN = 1 & TN = 85 \\ \hline \end{array} $$ This model will have accuracy = 94/100 = 94% and F1 = 18/24 = 75%. Then you build another model B: TP = 8, FP = 4, TN = 86 and FN = 2. $$ \begin{array}{|c|c|c|} \hline & \rm{True} & \rm{False} \\ \hline \rm{Positive} & TP = 8 & FP = 4 \\ \hline \rm{Negative} & FN = 2 & TN = 86 \\ \hline \end{array} $$ The accuracy = 94/100 = 94% and F1 = 16/22 ≈ 73%. Accuracy doesn't catch the difference between A and B because it cares equally about TP and TN; model B missed one more positive but picked up one more correct negative, so its accuracy is the same. F1 "doesn't care" about correct negatives, so it catches the lower rate of detected positives in model B.
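The confusion-matrix arithmetic above can be verified in a few lines (using $F_1 = 2TP/(2TP + FP + FN)$, which is equivalent to the harmonic mean of precision and recall):

```python
def scores(tp, fp, fn, tn):
    """Return (accuracy, F1) for one confusion matrix; F1 is 0 when TP = 0."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return accuracy, f1

print(scores(0, 0, 10, 90))   # all-negative "model": accuracy 0.90, F1 0.0
print(scores(10, 90, 0, 0))   # all-positive dummy:   accuracy 0.10, F1 ~0.18
print(scores(9, 5, 1, 85))    # model A:              accuracy 0.94, F1 0.75
print(scores(8, 4, 2, 86))    # model B:              accuracy 0.94, F1 ~0.73
```

Models A and B tie on accuracy but F1 separates them, which is the whole point of the comparison.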
41,469
How to understand RandomForestExplainer output (R package)
At each node, a subset of the full set of predictors is evaluated for their strength of association with the dependent variable. The strength of association may be measured using a correlation coefficient or some other metric (necessary if there are both categorical and continuous predictors). The most strongly associated predictor is then used to split the data. This implies that variables that occur closer to the root are more important, in the sense that they are most strongly associated with the dependent variable in each of the bootstrap data subsets. times_a_root and mean_min_depth are straightforward ways of measuring importance in this sense: a variable that is closer to the root, or one that on average occurs closer to the root, is one that is strongly associated with the dependent variable. no_of_nodes is distinct from the other two. Picture the dependent variable being a strong sinusoidal function of one predictor. The predictor will not necessarily appear important on the previous two metrics because of the lack of a clear trend/direction/non-zero linear slope in a bivariate plot. However, eventually the trees will start to split on this predictor, and will then continue to split on it a very large number of times to approximate the sinusoidal function. no_of_nodes will capture the importance of this and (I suspect) other nonlinear predictors (without a clear trend/direction/non-zero linear slope) better than the earlier metrics. That said, I think that accuracy_decrease (for classification) and mse_increase (for regression) are far better metrics of importance than the rest. They measure the decrease in the forest's predictive performance if a particular predictor is permuted. Correlated predictors will influence this, but they will also affect the other importance metrics as well. And whether that matters depends on what your goals for the analysis are.
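The idea behind accuracy_decrease/mse_increase, i.e. permutation importance, can be sketched without any forest at all (a toy illustration substituting a plain linear model for the forest; the data and feature names are made up, and x2 is pure noise, so permuting it should barely move the MSE):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x1 = rng.normal(size=n)                  # genuinely predictive feature
x2 = rng.normal(size=n)                  # irrelevant feature
y = 3 * x1 + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # fit once on the intact data
base_mse = np.mean((X @ beta - y) ** 2)

increases = {}
for j, name in [(1, "x1"), (2, "x2")]:
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])          # break this feature's link to y
    increases[name] = np.mean((Xp @ beta - y) ** 2) - base_mse

print(increases)   # permuting x1 hurts a lot; permuting x2 hardly at all
```

This is the same logic the forest metrics apply: the importance of a predictor is the damage done to predictive performance when its values are shuffled.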
41,470
Is Monte Carlo cross-validation procedure valid?
Short answer: it is neither wrong nor new. We were discussing this validation scheme under the name "set validation" roughly 15 years ago when preparing a paper*, but in the end never actually referred to it, as we didn't find it used in practice. Wikipedia refers to the same validation scheme as repeated random sub-sampling validation or Monte Carlo cross validation.

From a theory point of view, the concept was of interest to us because it is another interpretation of the same numbers usually referred to as hold-out (only the model the estimate is used for differs: hold-out estimates are used as performance estimate for exactly the model tested, whereas this set or Monte Carlo validation treats the tested model(s) as surrogate model(s) and interprets the very same number as performance estimate for a model built on the whole data set, as is usually done with cross validation or out-of-bootstrap validation estimates). It is somewhere in between the more common cross validation techniques (resampling without replacement, interpretation as estimate for the whole-data model), hold-out (see above: same calculation and numbers, but typically without the N iterations/repetitions and with a different interpretation) and out-of-bootstrap (the N iterations/repetitions are typical for out-of-bootstrap, but I've never seen this applied to hold-out, and it is [unfortunately] rarely done with cross validation).

* Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G.: Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005). The "set validation" error for N = 1 is hidden in fig. 6 (i.e. its bias and variance can be reconstructed from the data given, but are not explicitly stated), and it seems not optimal in terms of variance.

Are there arguments in favor or against the second procedure?

Well, in the paper above we found the total error (bias² + variance) of out-of-bootstrap and repeated/iterated $k$-fold cross validation to be pretty similar (with oob having somewhat lower variance but higher bias - but we did not follow up to check whether/how much of this trade-off is due to resampling with/without replacement and how much is due to the different split ratio of about 1 : 2 for oob). Keep in mind, though, that I'm talking about accuracy in small sample size situations, where the dominating contributor to variance is the same for all resampling schemes: the limited number of true samples for testing. And that is the same for oob, cross validation or set validation. Iterations/repetitions allow you to reduce the variance caused by instability of the (surrogate) models, but not the uncertainty due to the limited total sample size. Thus, assuming that you perform an adequately large number of iterations/repetitions N, I'd not expect practically relevant differences in the performance of these validation schemes. One validation scheme may fit better with the scenario you try to simulate by the resampling, though.
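The scheme under discussion (repeated random sub-sampling / Monte Carlo validation) can be sketched as follows. The function names and the toy "model" (a training-set mean scored by test-set MSE) are illustrative, not from the paper:

```python
import random
from statistics import mean

def monte_carlo_cv(data, fit, score, n_iter=100, test_frac=0.3, seed=0):
    """Repeated random sub-sampling ("set") validation: each iteration draws a
    fresh random train/test split, fits a surrogate model on the training part,
    and scores it on the held-out part. The averaged score is interpreted as an
    estimate for the model built on the whole data set."""
    rng = random.Random(seed)
    n_test = max(1, int(len(data) * test_frac))
    scores = []
    for _ in range(n_iter):
        shuffled = data[:]
        rng.shuffle(shuffled)
        test, train = shuffled[:n_test], shuffled[n_test:]
        model = fit(train)
        scores.append(score(model, test))
    return mean(scores)

# toy example: "model" = training mean, score = mean squared error on test set
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
est = monte_carlo_cv(data, fit=lambda tr: mean(tr),
                     score=lambda m, te: mean((x - m) ** 2 for x in te))
```

With n_iter = 1 this is exactly a single hold-out split; the N repetitions only average away the variance due to the instability of the surrogate models.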
41,471
In a Bayesian hierarchical model, if exchangeability doesn't hold, what exactly goes wrong?
Exchangeability is not necessary. There are Bayesian models in which observations are not exchangeable, for example models for time-series analysis and forecasting in weather prediction or finance. Generally speaking, in such models more recent observations are considered to be more relevant for inference about future ones; a sort of "fading memory". Exchangeability therefore cannot be assumed for them. There is a huge variety of non-exchangeable models; see the references below.

Exchangeable models are often easier to deal with, but they may be inappropriate. In fact, rather than "wrong" vs "right", the question is whether exchangeability or other assumptions, like the "fading memory" mentioned above, are more appropriate or reasonable for the inferences you're making, or computationally easier. We must often find a balance between these two aspects. There's no "right" or "wrong" because there's no experiment that can tell us whether an inference model is "correct". This is the fundamental issue of induction, about which many, many authors have written; I recommend the works of Hume, Johnson, Jeffreys, de Finetti, and Jaynes cited below. We can only apply a particular way of doing induction, formalized as a statistical model, and then see if we're satisfied with it or not. And this satisfaction depends on many criteria, many of which are subjective.

Texts like Bernardo & Smith: Bayesian Theory (Wiley 2000) focus more on exchangeability, but as they themselves remark (§ 1.4.1), their book is not meant to cover all kinds of inferences in Bayesian probability theory. Texts specifically focused on non-exchangeable models are, for example:

R. Prado, M. West: Time Series: Modeling, Computation, and Inference (CRC 2010) – this should be a good and recent starting point if you're already familiar with exchangeable models.
A. Pole, M. West, J. Harrison: Applied Bayesian Forecasting and Time Series Analysis (Springer 1994)
E. Greenberg: Introduction to Bayesian Econometrics (Cambridge 2008)
A. Zellner: An Introduction to Bayesian Inference in Econometrics (Wiley 1996)
G. L. Bretthorst: Bayesian Spectrum Analysis and Parameter Estimation (Springer 1988), http://bayes.wustl.edu/glb/bib.html
G. E. Box, G. M. Jenkins, G. C. Reinsel, G. M. Ljung: Time Series Analysis: Forecasting and Control (Wiley 2016), especially ch. 7
W. Palma: Long-Memory Time Series: Theory and Methods (Wiley 2007), especially ch. 8

See also the numerous references about time series that Bernardo & Smith give in § 5.6.5.

Regarding induction, some insightful texts are:

D. Hume: A Treatise of Human Nature: Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects (Oxford 1896), https://archive.org/details/treatiseofhumann00hume_0, Book I, § III.VI
W. E. Johnson: Probability: the deductive and inductive problems, Mind 41 n. 164 (1932), 409–423
W. E. Johnson: Logic. Part II: Demonstrative Inference: Deductive and Inductive (Cambridge 1922), https://archive.org/details/logic02john, chapters VIII and following
W. E. Johnson: Logic. Part III: The Logical Foundations of Science (Cambridge 1924), https://archive.org/details/logic03john, the Appendix
B. de Finetti: Foresight: Its Logical Laws, Its Subjective Sources, in Kyburg, Smokler: Studies in Subjective Probability (Krieger 1980), pp. 53–118
B. de Finetti: Probability, Induction and Statistics: The art of guessing (Wiley 1972), chapter 9
H. Jeffreys: The present position in probability theory, Brit. J. Phil. Sci. 5 n. 20 (1955), 275–289
H. Jeffreys: Scientific Inference (Cambridge 1973), chap. I
H. Jeffreys: Theory of Probability (Oxford 2003), § 1.0
E. T. Jaynes: Probability Theory: The Logic of Science (Cambridge 2003), http://www-biba.inrialpes.fr/Jaynes/prob.html, http://omega.albany.edu:8008/JaynesBook.html, http://omega.albany.edu:8008/JaynesBookPdf.html, § 9.4
41,472
In a Bayesian hierarchical model, if exchangeability doesn't hold, what exactly goes wrong?
From the representation theorem, we know that exchangeability is essentially just an operational condition that is equivalent to the conditional IID form (which implies equicorrelation among the observable values). If this doesn't hold, it just means that there is some structure to the problem that is incompatible with the conditional IID form. This could be some kind of auto-correlation, or another order-based correlated form (as opposed to equicorrelation), or some other kind of effect involving statistical dependencies that are not equal among pairs of observables.
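A small simulation can illustrate the contrast drawn here: a conditionally IID (de Finetti-style) sequence shows the same correlation for every pair of observables, while an AR(1) sequence (an order-based, "fading memory" dependence) does not. The parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n_seq, length = 20000, 5

# Exchangeable: conditionally IID given a latent mean (de Finetti mixture)
theta = rng.normal(size=(n_seq, 1))
exch = theta + rng.normal(size=(n_seq, length))

# Non-exchangeable: AR(1) chain, where correlation decays with lag
ar = np.zeros((n_seq, length))
ar[:, 0] = rng.normal(size=n_seq)
for t in range(1, length):
    ar[:, t] = 0.8 * ar[:, t - 1] + rng.normal(size=n_seq)

def corr(x, i, j):
    return np.corrcoef(x[:, i], x[:, j])[0, 1]

# equicorrelation for the exchangeable sequence: ~0.5 for every pair of indices
exch_lag1, exch_lag4 = corr(exch, 0, 1), corr(exch, 0, 4)
# for the AR(1) sequence, correlation depends on how far apart the indices are
ar_lag1, ar_lag4 = corr(ar, 0, 1), corr(ar, 0, 4)
```

For the mixture, every pair has correlation var(theta)/(var(theta)+var(noise)) = 0.5 regardless of index; for the AR(1) chain it decays geometrically with lag, which is exactly the kind of dependence structure that is incompatible with exchangeability.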
41,473
How is PCA applied to new data?
I will answer each question:

1. $A$ is indeed the covariance matrix (so $X^TX$, assuming $X$ is standardized).
2. The output of PCA is 3 things: the vector of column means $\mu$ of $X$, the vector of column stddevs $\sigma$ of $X$, and the rotation matrix $R = [v_1 ... v_p]$. Therefore, for a new sample row $x_0^T$, to compute its projection onto Principal Component Space, you have to standardize and rotate, that is $((x_0 - \mu) / \sigma)^T R$, which will yield a row vector with $x_0$ in PC coordinates. Please note that here I'm dividing by $\sigma$ elementwise.
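A minimal numpy sketch of this fit/transform split (variable names are illustrative). The point is that $\mu$, $\sigma$ and $R$ come from the training data and are reused unchanged for the new sample:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))  # correlated training data

# "fit": store column means, column stddevs, and the rotation matrix R
mu = X.mean(axis=0)
sigma = X.std(axis=0)
Z = (X - mu) / sigma                 # standardize elementwise
A = Z.T @ Z / (len(Z) - 1)           # covariance matrix of the standardized data
eigvals, R = np.linalg.eigh(A)       # columns of R are the eigenvectors v_1..v_p
R = R[:, np.argsort(eigvals)[::-1]]  # order by decreasing eigenvalue
scores_train = Z @ R                 # training data in PC coordinates

# "transform" a new sample row x0: standardize with the *training* mu and sigma,
# then rotate -- exactly ((x0 - mu) / sigma)^T R from the answer above
x0 = rng.normal(size=4)
scores = ((x0 - mu) / sigma) @ R     # x0 in PC coordinates
```

Projecting the training data itself gives components with decreasing variance and zero cross-covariance, which is a quick internal check that the rotation is correct.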
41,474
Lower Bound on $E[\frac{1}{X}]$ for positive symmetric distribution
Let's try the usual preliminaries: simplify by choosing appropriate units of measurement and exploiting the symmetry assumption.

Reframing the question

Change the units of $X$ so that its mean is $m=1$: this will not alter the truth of the inequality. Thus the distribution $F$ of $X$ is symmetric about $1$ and the range of $X$ is within the interval $[0,2]$. Our objective is to prove $$\int_0^2 \frac{1}{x}dF(x) = E\left[\frac{1}{X}\right] \ge 1 + \sigma^2 = E[X^2] = \int_0^2 x^2 dF(x).$$ Since $dF$ is invariant under the symmetry $x\to 2-x$, break each integral into two integrals over the intervals $[0,1)$ and $(1,2]$ and change the variable from $x$ to $2-x$ over the second interval. We may ignore any probability concentrated at the value $1$ because at that point $1/x = x^2.$ Whence the problem reduces to demonstrating $$\int_0^1 \left[\left(\frac{1}{x} + \frac{1}{2-x}\right) -(x^2 + (2-x)^2)\right]dF(x) \ge 0.\tag{*}$$ This can happen only if the integrand $$g(x) = \frac{1}{x} + \frac{1}{2-x} -(x^2 + (2-x)^2)$$ is nonnegative on the interval $(0,1].$ That's what we must show.

Solution

You could apply differential calculus. Elementary demonstrations are also available. When $0 \le x \le 1$, it will be the case that $0 \le |x-1| \le 1$, whence $1 \le 1/|x-1|$, entailing $$1 \le \frac{1}{(x-1)^2} \le \frac{1}{(x-1)^4}.$$ This implies $$0 \le \frac{1}{(x-1)^4} - \frac{1}{(x-1)^2} = \frac{2}{g(x)},$$ showing $g(x) \ge 0$ for $x\in (0,1],$ QED.

The inequality is tight in the sense that when $F$ concentrates its probability closer to $1$, the inequality gets closer to being an equality. Thus, we could not replace $\sigma^2/m^3$ in the original inequality by any multiple $\lambda\sigma^2/m^3$ with $\lambda \gt 1$.
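A quick numerical sanity check of the key step ($g(x) \ge 0$ on $(0,1]$) and of the normalized inequality, here with an arbitrary symmetric two-point distribution on $\{1-d,\, 1+d\}$:

```python
import numpy as np

def g(x):
    # the integrand from the proof, after normalizing the mean to 1
    return 1 / x + 1 / (2 - x) - (x**2 + (2 - x) ** 2)

xs = np.linspace(1e-4, 1.0, 10001)
gmin = g(xs).min()  # should be (numerically) nonnegative; the minimum 0 is at x=1

# E[1/X] >= 1 + sigma^2 for X uniform on {1-d, 1+d} (mean 1, variance d^2):
d = 0.7
lhs = 0.5 / (1 - d) + 0.5 / (1 + d)  # E[1/X] = 1 / (1 - d^2)
rhs = 1 + d**2                       # 1 + sigma^2
```

As $d \to 0$ the two sides agree to order $d^2$, which reflects the tightness remark at the end of the proof.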
41,475
In reinforcement learning, what is the formal definition of the symbols $S_t$ and $A_t$?
In the summary of notation (page xvi) of Sutton and Barto's book, they define $S_t$ as:

state at time $t$, typically due, stochastically, to $S_{t-1}$ and $A_{t-1}$

This is similar to what you observed:

However, what confuses me is that in order to properly define $S_t$ for $t>0$ it seems necessary to first define all the $S_i, A_i$, for $0≤i<t$.

The main difference between the book's definition and your observation is that they only take $i = t-1$, not $0 \leq i < t$. Only that single prior step is sufficient due to the Markov property, which is basically assumed to hold throughout the entire book; we're almost always talking about Markov Decision Processes. Another difference is that $S_t$ does not necessarily require $S_{t-1}$ and $A_{t-1}$ to be well-defined; those values only happen to typically explain how we ended up where we are now ($S_t$). The obvious exception is the initial state $S_0$, which we simply end up in... out of the blue, kind of.

Hence, it seems that the definition of $S_t$ only makes sense if we specify with which policy we're choosing our actions at all the time-steps before we reached time-step $t$.

This is not necessary. An agent could theoretically change their policies during an episode too. In fact, as a random variable, $S_t$ doesn't even really represent just a single value. It's a symbol that we use to denote the state that we happen to be in during some episode at time $t$, without caring about what we did before that or plan to do after it. See the wikipedia page on random variables.

\begin{equation} Q_\pi (s,a) = \mathbb{E}_\pi \left[ G_t \mid S_t = s, A_t = a \right] \end{equation}

This equation simply says that the value of $Q_\pi$ is equal to the returns that we expect to obtain if:

we start following policy $\pi$ from now on (it doesn't matter which policy we've been following up until now),
we happen to currently be in state $s$ (this is a specific value, no longer a random variable), and
we happen to have chosen to select action $a$.

Note how the definition does not depend directly on what policy we've been using in the past. It only depends on the past policy indirectly, in the sense that it explains how we may have ended up in state $S_t = s$. But we do not require knowledge of our past policy to properly define anything in this equation, and that past policy will often not even be the only requirement for a complete explanation of how we ended up where we happen to be now. For example, in nondeterministic environments, we may also require knowledge of a random seed and a Random Number Generator to completely explain why we are where we are now. But the definition of the equation does not rely on the ability to explain this. We just take for granted that, at time $t$, we are in state $S_t = s$, and the equation is well-defined from there. This equation happens to rely on the future policy $\pi$, but that may be a different policy from our past policy, and this reliance is denoted by the subscript on $Q_\pi$ and $\mathbb{E}_\pi$.
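The Markov property described here - sampling $S_{t+1}$ from $(S_t, A_t)$ alone, with no earlier history - can be sketched with a toy two-state MDP. The states, actions, and transition probabilities below are made up for illustration:

```python
import random

# Toy two-state MDP. States: 0, 1. Actions: "stay", "go".
# P[(s, a)] = probability of moving to state 1 from state s under action a.
P = {
    (0, "stay"): 0.1, (0, "go"): 0.9,
    (1, "stay"): 0.9, (1, "go"): 0.1,
}

def step(s, a, rng):
    """Sample S_{t+1} given only (S_t, A_t); no earlier history is needed."""
    return 1 if rng.random() < P[(s, a)] else 0

def episode(policy, s0, horizon, rng):
    """Roll out S_0, A_0, S_1, A_1, ... where each A_t comes from the policy
    and each S_{t+1} comes from step() -- i.e. from (S_t, A_t) alone."""
    states, actions = [s0], []
    for _ in range(horizon):
        a = policy(states[-1], rng)
        actions.append(a)
        states.append(step(states[-1], a, rng))
    return states, actions

rng = random.Random(0)
always_go = lambda s, rng: "go"
states, actions = episode(always_go, s0=0, horizon=10, rng=rng)
```

Note that the policy argument can be swapped out mid-episode without changing anything about how step() is defined, which mirrors the point that $S_t$ does not depend on committing to one past policy.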
41,476
In reinforcement learning, what is the formal definition of the symbols $S_t$ and $A_t$?
I came up with one possible interpretation of these symbols that seems to be consistent with all the material I've seen so far in these courses. As pointed out in Dennis Soemers' answer, Sutton's book defines $S_t$ as:

state at time $t$, typically due, stochastically, to $S_{t−1}$ and $A_{t−1}$

The "typically due" led me to think that perhaps $S_t$ doesn't refer to the same thing in all contexts. Maybe $S_t$ in one context is actually a random variable, with a distribution that depends on $S_{t-1}$ and $A_{t-1}$. Then, in another context (or in the same context but for other $S_i$, $i \neq t$), it is a deterministic quantity, already specified for us. Which leads me to interpret $S_t$ as simply a placeholder symbol. By this I mean that the symbol $S_t$ doesn't mean much without a context. The context then specifies whether that state is supposed to be random, with a distribution conditioned on the state and action at the previous time step, or whether it is a deterministic value. Furthermore, given a context, it's not necessary for all $S_i$ to share the same nature: some can be random and some can be deterministic (although the case where more than one state in the chain is deterministic is probably meaningless$^{[1]}$).

Hence, when we write things like $$ Q_\pi(s,a)=\mathbb E_\pi[G_t | S_t=s, A_t=a] ~,$$ we mean that $S_t$ and $A_t$ are not random (not even random variables whose values we have observed). Instead, they are simply deterministic values given by us. However, the subscript $\pi$ in the expectation is then the context that defines all the $S_i$ and $A_i$ for $i>t$ to be random, with distributions that depend on the previous state and action, or on the policy $\pi$ and the previous state, respectively.

Whether or not the subscript $\pi$ also determines the nature of all the $S_i$, $A_i$ for $i<t$ seems irrelevant, since (because of its definition) the quantity $G_t$ won't depend on them (but I would argue it doesn't, otherwise $S_t$ shouldn't be deterministic). This previous equation is in some sense asking: "What is the expected value of $G_t$, in the context that we start at state $s$, take the action $a$, and from that point onward we sample states and returns from the MDP model we defined somewhere else, and sample actions from our policy $\pi$?"

I think it's important to repeat that in that equation $S_t$ and $A_t$ are not (at least according to what I could come up with as a sensible interpretation of this notation) random variables! They are not random variables whose values we have somehow observed. Otherwise, this equation would suffer from the problem I described earlier: if policy $\pi$ never chooses action $a$, or never reaches state $s$, then we would be conditioning on impossible events. Instead, they're deterministic quantities. I would even go as far as advocating the notation $$ Q_\pi(s,a)=\mathbb E_\pi[G_t] \quad,\quad S_t=s \text{ and } A_t=a$$ instead, to clear up any possible confusion that these variables might be random.

[1]: Note that this is fundamentally different from the case where these states are all random, but we have observed the value of them.
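The reading proposed here - $S_t = s$ and $A_t = a$ fixed deterministically by us, with only the subsequent states and actions random - corresponds directly to how $Q_\pi(s,a)$ is estimated by Monte Carlo rollouts. A toy sketch (the two-state MDP, reward, and parameter values are made up for illustration):

```python
import random

# Toy two-state MDP: P[(s, a)] = probability of landing in state 1.
P = {(0, "stay"): 0.1, (0, "go"): 0.9,
     (1, "stay"): 0.9, (1, "go"): 0.1}
reward = lambda s_next: float(s_next == 1)  # reward 1 for reaching state 1

def q_estimate(s, a, pi, gamma=0.9, horizon=30, n_episodes=5000, seed=0):
    """Monte Carlo estimate of Q_pi(s, a): S_t = s and A_t = a are set by us
    (not sampled); only the later states and actions are random, drawn from
    the transition model and from the policy pi, respectively."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_episodes):
        g, discount = 0.0, 1.0
        state, action = s, a            # deterministic start: S_t = s, A_t = a
        for _ in range(horizon):
            state = 1 if rng.random() < P[(state, action)] else 0
            g += discount * reward(state)
            discount *= gamma
            action = pi(state, rng)     # from now on, actions come from pi
        total += g
    return total / n_episodes

pi_go = lambda s, rng: "go"
q = q_estimate(0, "go", pi_go)
```

No conditioning on a possibly-impossible event is needed: we simply start every rollout at the chosen $(s, a)$, regardless of whether $\pi$ itself would ever select $a$ there.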
41,477
In reinforcement learning, what is the formal definition of the symbols $S_t$ and $A_t$?
I am scratching my head about this as well... In principle, what follows is more of a comment than an answer. Everything I say is guesswork, because I have not yet read a formal description either. I think that most applied statisticians have a very deep understanding of what they are doing, but they do not allow the rest of the world to understand it properly because, as you said, apparently they like very much to confuse random variables, values, measures, distributions, densities and so on. Nevertheless, in all cases there is a clean, mathematical, unambiguous interpretation behind the symbols.

We start with a probability space $\Omega$ (with some $\sigma$-algebra and some measure $P$ on it). In Markov processes we have some simple 'input' that we are given and need to interpret and understand. The input is a (for now finite) set of possible state values $S = \{s_1, ..., s_n\}$, a (for now finite) set of actions $A = \{a_1, ..., a_m\}$, and a deterministic transition function $\Delta : S \times A \to S$ that, when evaluated as $\Delta(s,a)$, gives us a completely fixed description of what happens (i.e. what the next state is) when we are in state $s$ and take action $a$. Nothing probability-related so far.

The first thing that people seem to be confused about is the policy (is it random or not?). The input we are given is what they call $\pi : S \times A \to [0,1]$ such that for all $s$ we have $\sum_{a \in A} \pi(s,a) = 1$. The first mistake is to say that $\pi$ is somehow random, or a random variable. Up to now we can read it as a matrix of size $|S| \times |A|$ such that the row sums are $1$. Absolutely deterministic. I think what they actually mean is the following: we are given a whole set of random variables $(\alpha_s)_{s \in S} : \Omega \to A$ that are responsible for sampling the action that we take. We will write this as $\alpha(s, \omega) = \alpha_s(\omega)$.
Now what their $\pi$ means is nothing else but $$P[\alpha(s, \omega) = a] = \pi(s, a)$$ i.e. their $\pi$ gives the distribution of the variables $\alpha_s$.

I think that you are absolutely right in saying that we must now define the random variables $S_t$ and $A_t$ recursively: $$ A_t(\omega) = \alpha(S_t(\omega), \omega) $$ i.e. the action we take at time $t$ is defined by what the policy "randomly" chooses when given the current state $s_t = S_t(\omega)$ and the general information $\omega$. $S_t$ in turn is defined as $$S_t(\omega) = \Delta(S_{t-1}(\omega), A_{t-1}(\omega))$$ i.e. the state $s_t$ at time $t$ is the one we get from the transition function $\Delta$ when it is evaluated at the state $s_{t-1} = S_{t-1}(\omega)$ and the action $a_{t-1} = A_{t-1}(\omega)$ that we took before.

The mere fact that this definition is recursive is not too bad: on the one hand, this seems to be the nature of the RL game (take the current state and do something with it in order to get to the next state); on the other, mathematics is full of recursive definitions: the factorial function, the Fibonacci numbers, ...

Now the only question that remains is how to define $S_0$! However, this is something that must come from the input: when considering a Markov process we are usually given a finite state graph. When unfolding that and doing probability theory with it, we always end up with terms like $$p(s_{t}|s_{t-1}) \cdot \ldots \cdot p(s_1|s_0) \cdot p(s_0)$$ i.e. that tiny little $p(s_0)$ stays there no matter what you do. I think it is something the user must specify (like a prior in the Bayesian setup). However, as we gain more and more information with many states, and the current situation only depends on the very last state and not on what we did initially, I guess it is not so important how to define $S_0$, so I think it is fine to define $S_0$ as just a uniformly distributed random variable over the states.
Or maybe it is like this: the thing we want to do in the end is to compare policies (in order to get the best one). When comparing them, I think that $p(s_0)$ has little influence. For example, if you compute some KPI that somehow needs to divide the expressions above for different policies, then the $p(s_0)$ cancels out... Does that make sense?

Edit: now thinking about what you said about the deterministic policy and the "impossible" event in the comments...
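The recursive construction above can be sketched directly in code: $\pi$ as a row-stochastic matrix, $\alpha$ as the action-sampling step, and $S_{t+1} = \Delta(S_t, A_t)$. The policy matrix, transition table, and sizes below are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite setup: 3 states, 2 actions, a row-stochastic
# policy matrix pi, and a deterministic transition function Delta
# given as a lookup table.
pi = np.array([[0.8, 0.2],
               [0.5, 0.5],
               [0.1, 0.9]])          # pi[s, a] = P(alpha_s = a)
Delta = np.array([[0, 1],
                  [2, 0],
                  [1, 2]])           # Delta[s, a] = next state

assert np.allclose(pi.sum(axis=1), 1.0)   # row sums are 1: deterministic object

def rollout(s0, T):
    """Recursively build S_0..S_T and A_0..A_{T-1}:
    A_t = alpha(S_t, omega) ~ pi[S_t, :], then S_{t+1} = Delta[S_t, A_t]."""
    states, actions = [int(s0)], []
    for _ in range(T):
        a = int(rng.choice(2, p=pi[states[-1]]))   # sample the action
        actions.append(a)
        states.append(int(Delta[states[-1], a]))   # deterministic transition
    return states, actions

states, actions = rollout(s0=0, T=10)
```

The only randomness enters through the action sampling; everything else, including $\Delta$ and the matrix $\pi$ itself, is deterministic, matching the reading above.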
41,478
Best approach for count prediction in time-series?
You have intermittent time series, i.e., your time series are integer-valued, nonnegative and "mostly" zero. You may want to search for "forecasting intermittent time series" or similar.

The classical approach for point forecasts in such a case is Croston's method. One alternative is a Poisson or negative binomial regression on whatever regressors make sense (e.g., trend, seasonal dummies, causals etc.). I have also seen integer ARMA (INARMA) models, e.g., in a Ph.D. thesis by Mona Mohammadipour, but these are not very common.

One thing that I have not yet seen is linking multiple such time series together. In the continuous case, this would be Vector Autoregression (VAR, not to be confused with VaR, which is Value at Risk). An analogue for the integer case could conceivably be called VINAR, but as I said, I have never seen this.
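For illustration, here is a minimal sketch of Croston's method: smooth the nonzero demand sizes and the inter-demand intervals separately with simple exponential smoothing, then forecast their ratio. The demand series, smoothing parameter, and initialization below are made up, and this is a from-scratch sketch, not a library API:

```python
def croston(y, alpha=0.1):
    """Croston's point forecast for an intermittent demand series y."""
    # initialise with the first nonzero demand
    first = next(i for i, v in enumerate(y) if v > 0)
    z = float(y[first])    # smoothed demand size
    p = float(first + 1)   # smoothed interval between demands
    q = 1                  # periods since last demand
    for v in y[first + 1:]:
        if v > 0:
            # update both smoothers only when a demand occurs
            z = alpha * v + (1 - alpha) * z
            p = alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
    return z / p  # expected demand per period

demand = [0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1]
forecast = croston(demand)
```

Note the forecast is a flat per-period rate, typically a fraction between zero and the typical demand size, which is exactly what you want for "mostly zero" series where a plain moving average would be dragged around by the zeros.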
41,479
Mathematical Machine Learning Theory "from scratch" textbook?
Per @Coffee's recommendation, I would recommend the text Machine Learning: A Bayesian and Optimization Perspective by Sergios Theodoridis, along with Pattern Recognition by the same author. These two texts combined total 2,000 pages and cover everything from undergrad-level probability to linear models and (as far as I can tell) everything covered by Elements of Statistical Learning, in addition to time series, probabilistic graphical models, deep learning, and Monte Carlo methods. The author makes an excellent effort to keep all notation clear and consistent (thank you for bolding all of your vectors!) and seems to have chosen his exercises carefully. Having a background in probability, as well as stats at the level of Casella and Berger, would be extremely helpful before pursuing these texts. There is some discussion of UMVUEs in here.
41,480
Q-learning vs. Value Iteration
I don't see anything in its update equation as to why it requires knowledge of rewards before hand and why it cannot be trained in an online way in the same way that Q-learning can?

The usual Value Iteration update rule is: $$v(s) \leftarrow \max_a\Big[\sum_{r,s'} p(r, s'|s,a)\big(r + \gamma v(s')\big)\Big]$$ or it might be written as some variation of $$v(s) \leftarrow \max_a\Big[R_s^a + \gamma \sum_{s'} p(s'|s,a)v(s')\Big]$$

In the first equation, $r$ is a known reward value, so you must already know its distribution. In the second equation, $R_s^a$ is the expected reward from a particular state-action pair. It is also possible to use $R_{ss'}^a$, the expected reward from a particular transition. None of these are observed reward instances. You cannot use observed reward instances directly in Value Iteration and trust that this will work in expectation in the long term, because you need to take the max over possible actions, and it is not usually possible to observe the outcome of taking all possible actions. You could in theory maintain a separate table of estimated expected rewards, and that would then be a complete RL algorithm, not much different from other single-step TD algorithms. It would be a kind of halfway house to Q-learning, as you would still need to know the transition probabilities in order to use it.

The equation you give for policy iteration uses a slightly different view of reward: $$U(s) = R(s) + \gamma \max_a\sum_{s'}{T(s, a, s')U(s')}$$ Here $R(s)$ appears to stand for "fixed reward for arriving in this state". If you have a full model to go with the state transitions, then it is provided. But could you learn it from observations? Yes, with one important caveat: the formula assumes that each state has one and only one associated reward value.
If you can safely assume that your MDP has fixed rewards associated with landing in specific states (and often you can, if you have constructed the MDP as part of a game or virtual environment), then yes, you should be able to run value iteration much like Q-learning, in an online fashion, using that update rule. The equation will work just fine without you needing to store or estimate anything about immediate rewards; the expected and immediate values would be the same. You should note this is not a completely generic solution that applies to all MDPs.

The next thing is that I'm not sure why the policy would ever change in value iteration?

The policy changes implicitly. The deterministic policy from value iteration is the one that takes the action with the best expected return: $$\pi(s) \leftarrow \operatorname{argmax}_a\Big[\sum_{r,s'} p(r, s'|s,a)\big(r + \gamma v(s')\big)\Big]$$ This clearly will change as your values converge. All optimal control algorithms need to deal with this, which is why many use a rolling average (a learning rate parameter) to estimate values, and not a true mean.
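To make the role of the known model concrete, here is a minimal synchronous value-iteration sketch on a hypothetical two-state, two-action MDP, with expected rewards $R_s^a$ and transition probabilities $p(s'|s,a)$ given up front (all numbers invented for illustration). Note that the greedy policy is only ever derived implicitly from $v$:

```python
import numpy as np

# Hypothetical model: P[s, a, s'] = p(s'|s, a), R[s, a] = expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[0.0, 1.0],
              [0.5, 2.0]])
gamma = 0.9

v = np.zeros(2)
for _ in range(1000):
    # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] v[s']
    q = R + gamma * (P @ v)
    v_new = q.max(axis=1)   # the max over actions requires the model
    if np.max(np.abs(v_new - v)) < 1e-10:
        v = v_new
        break
    v = v_new

# The (implicit) greedy policy, read off from the final Q-values.
policy = q.argmax(axis=1)
```

The `max` on each sweep is exactly the step that needs all actions' expected backups at once, which is why observed reward samples alone are not enough here.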
41,481
Sampling distribution of sample variance of non-normal iid r.v.s
The asymptotic distribution for the sample variance (in the general non-normal case) can be found in O'Neill (2014) (Result 14, p. 285). As others have pointed out in the comments to your question, the more general result can be obtained via a combination of the CLT and Slutsky's theorem, working on an expansion for the sample variance (the cited paper has the proof, so you can see that technique).

The generalised asymptotic result is similar to the (exact) distribution for the normal case, except that the degrees-of-freedom parameter is affected by the kurtosis of the underlying distribution. Higher kurtosis in the underlying distribution leads to lower accuracy, since extreme values are more common; lower kurtosis leads to greater accuracy, since extreme values are rarer. As can be seen from Result 14 in the above-cited paper, the general case (with finite variance and kurtosis) has the asymptotic approximation: $$\frac{S^2}{\sigma^2} \sim \frac{\chi^2 (DF_n)}{DF_n} \quad \quad \quad DF_n \equiv \frac{2 \sigma^4}{\mathbb{V}(S^2)} = \frac{2n}{\kappa - (n-3)/(n-1)},$$ where $\kappa$ is the kurtosis of the underlying distribution.

In the case of a mesokurtic distribution (such as the normal distribution) you have $\kappa = 3$, which gives $DF_n = n-1$, the well-known distribution for the normal case. (You have accidentally squared this term in the equation in your question.) In the case of an underlying platykurtic (leptokurtic) distribution, the degrees-of-freedom parameter is higher (lower) than in the normal case.

As you can see from the definition of the degrees-of-freedom parameter in this result, this parameter is formed from the underlying kurtosis through the variance of the sample variance. (The kurtosis affects the variance of the sample variance, which is why it enters into this analysis.) The degrees-of-freedom parameter is adjusted to ensure that the variance of the chi-squared distribution matches the true variance of the sampling statistic.
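A quick numerical check of the degrees-of-freedom formula makes the kurtosis effect visible (the values of $n$ and $\kappa$ below are arbitrary; $\kappa = 1.8$ is the kurtosis of the uniform distribution):

```python
def df(n, kappa):
    """Degrees of freedom DF_n = 2n / (kappa - (n-3)/(n-1))."""
    return 2 * n / (kappa - (n - 3) / (n - 1))

n = 30
df_normal = df(n, 3.0)   # mesokurtic: recovers the exact n - 1
df_lepto = df(n, 6.0)    # leptokurtic: fewer degrees of freedom
df_platy = df(n, 1.8)    # platykurtic (e.g. uniform): more
```

Algebraically, $\kappa = 3$ gives $2n(n-1)/(3(n-1)-(n-3)) = 2n(n-1)/(2n) = n-1$, so the chi-squared result for the normal case falls out as a special case.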
41,482
Deep NNs, backpropagation and error calculation
"When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network" Is that the sentence you're confused about? Note that in this sentence he's not trying to provide intuition for ''applying the transpose operator'', he's trying to provide intuition for ''aplplying the transpose weight matrix'', which basically means ''multiplying by the transpose weight matrix''. That entire act of multiplying the error vector with the transposed weight matrix, that is what can be understood intuitively as moving the error backward through the network. Not just the transpose operator on its own. The reason for doing the transpose is simply to make the dimensions work out, that matrix needs to be transposed in order to guarantee that multiplying it with the error vector afterwards results in the correct dimensionality. The reason why this application (~= multiplication) with the transposed weight matrix can be understood as moving the error backward through the network is kind of exactly the same intuition you probably already have for the forwards pass. In the forwards pass, a high/strong weight means a node in one layer has a large influence on the connected node in the next layer. Exactly the same intuition holds here for the backwards pass, because, just like in the forwards pass, we're multiplying something with a weight. In the forwards pass, that something is the activation level of a node in the previous layer. In the backwards pass, that something is the error observed in a node in the later layer. If a particular connection is ''strong'' (weight has a high value), and we also have a large error, we should put a large amount of ''blame'' for the error on that large weight. This is what we get by multiplying by the (transposed) weight matrix; large weights take a large portion of the ''blame'' for large errors. 
More Elaborate explanation for transpose: Let's consider the forwards pass, following the notation from your link. For simplicity, let's ignore activation functions and biases for now, they're not really important for our intuition. Then, equation 25 in your link tells us that the activation vector $a^l$ of layer $l$ is defined as $a^l = w^l * a^{l-1}$. To make the notation a bit more consistent with the equation for the backwards pass, we'll rewrite this as $a^{l+1} = w^{l+1} * a^l$ (simply added $1$ to all layer indices). $w^{l+1}$ is a matrix, and the $a$ things are vectors, so it's convenient to consider again what the matrix-vector multiplication looks like. Take a look at the image near the top here for example: http://mathinsight.org/matrix_vector_multiplication , copied at the bottom of this answer. Let's consider the activation level of the very first node (at the top) of layer $l+1$. This would be the top element of the vector in the right-hand side of the equation on the page I linked. This one activation level is determined by the complete vector of activation levels of the entire previous layer, and the very first row of weights in the weight matrix. Now suppose we have a vector $\delta^{l+1}$ of errors in layer $l+1$. Let's again consider only the top node of this layer. The activation level of this particular node was determined by the top row of the weight matrix $w^{l+1}$, and the entire vector of activation levels $a^l$. We'll want to "punish" each of the weights in that top row proportional to the magnitude of that weight and our error. So, to do that, we'll want some kind of multiplication between $w^{l+1}$ and $\delta^{l+1}$ (note that this is different from the forwards pass, in which we had a multiplication between the same weight matrix, but an activation vector from a different layer). 
This should again result in a vector with the same shape as $a^l$ (this is already one clue that we have to take the transpose; otherwise the shape simply won't be correct). To figure out how this multiplication should look exactly, we'll have to investigate which weights exactly are to blame for which errors. The top row of weights in the matrix $w^{l+1}$ had (during the forwards pass) influence on only the top element of $a^{l+1}$, and is therefore now only to blame for the top element of the error vector $\delta^{l+1}$; that entire top row of weights should therefore now also only be multiplied by the top element of $\delta^{l+1}$ in order to compute a part of $\delta^l$. In the notation of the image near the top of that page I linked, this means that we would like our top row of weights to be multiplied only by $x_1$. But that's not what the picture of matrix-vector multiplication says; that picture says that the elements of the top row of the matrix are multiplied with all the $x$'s. If you look carefully, there is a vector in that matrix which is multiplied solely by the top element $x_1$ though: the very first column of the weight matrix. By transposing the weight matrix, we interchange rows and columns, and we essentially get exactly the multiplications we desire. Linked matrix-vector multiplication: $\begin{align*} A{x}= \left[ \begin{array}{cccc} a_{11}& a_{12}& \ldots& a_{1n}\\ a_{21}& a_{22}& \ldots& a_{2n}\\ \vdots& \vdots& \ddots& \vdots\\ a_{m1}& a_{m2}& \ldots& a_{mn} \end{array} \right] \left[ \begin{array}{c} x_1\\ x_2\\ \vdots\\ x_n \end{array} \right] = \left[ \begin{array}{c} a_{11}x_1+a_{12}x_2 + \cdots + a_{1n} x_n\\ a_{21}x_1+a_{22}x_2 + \cdots + a_{2n} x_n\\ \vdots\\ a_{m1}x_1+a_{m2}x_2 + \cdots + a_{mn} x_n\\ \end{array} \right] \end{align*} $
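To make the row/column picture concrete, here is a small NumPy sketch (my own toy example, not from the linked book): the backward pass multiplies the error by $W^T$, so each node in layer $l$ collects "blame" through exactly the weights (a column of $W$) it used in the forwards pass.

```python
import numpy as np

# Toy layer: 3 nodes in layer l, 2 nodes in layer l+1 (biases/activations ignored).
W = np.array([[0.5, -1.0, 2.0],
              [0.1,  0.3, 0.0]])    # shape (2, 3): row i = weights into node i of layer l+1

a = np.array([1.0, 2.0, 3.0])       # activations of layer l
a_next = W @ a                      # forwards pass: each output uses a ROW of W

delta_next = np.array([1.0, -2.0])  # error vector of layer l+1
delta = W.T @ delta_next            # backwards pass: shape (3,), same as a

# Node j of layer l is blamed through the weights leaving it,
# i.e. column j of W (= row j of W.T):
manual = np.array([W[:, j] @ delta_next for j in range(3)])
print(np.allclose(delta, manual))   # True
```

The transpose is exactly the bookkeeping that turns "rows of $W$" (forwards) into "columns of $W$" (backwards).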
41,483
Generate a Gaussian and a binary random variables with predefined correlation
To generate such a pair $(B,Y)$ with $B$ Bernoulli (with parameter $p$) and $Y$ normal, why not begin with a suitable binormal variable $(X,Y)$ and define $B$ to be the indicator that $X$ exceeds its $1-p$ quantile? By centering $(X,Y)$ at the origin and standardizing its marginals, the only question concerns what correlation $r$ should hold between $X$ and $Y$ so that the correlation between $B$ and $Y$ will be a given value $\rho$. To this end, express $Y = r X + \sqrt{1-r^2}Z$ for independent standard Normal variables $X$ and $Z$. Set $x_0$ to be the $1-p$ quantile of $X$, so that $\Phi(x_0)=1-p$. (As is conventional, $\Phi$ is the standard Normal distribution and $\phi$ will be its density.) Since the variance of $B$ is $p(1-p)$ and the variance of $Y$ is $1$, and $Y$ has zero mean, the correlation between $B$ and $Y$ is $$\eqalign{ \rho&=\operatorname{Cor}(B,Y) = \frac{E[BY] - E[B]E[Y]}{\sqrt{p(1-p)}\sqrt{1}}\\ &= \frac{E[B(rX+\sqrt{1-r^2}Z)]-0} {\sqrt{p(1-p)}} \\ &= \frac{rE[X\mid X \ge x_0]\Pr(X \ge x_0)}{\sqrt{p(1-p)}}. }$$ The conditional expectation is readily computed by integration, giving $$E[X\mid X \ge x_0]\Pr(X \ge x_0) = \frac{1}{\sqrt{2\pi}}\int_{x_0}^\infty x e^{-x^2/2}dx = \frac{e^{-x_0^2/2}}{\sqrt{2\pi}} = \phi(x_0),$$ whence $$\rho = \frac{r \phi(x_0)}{\sqrt{p(1-p)}}.$$ Solve this for $r$: by setting $$r = \frac{\rho \sqrt{p(1-p)}}{\phi(x_0)},$$ $B$ and $Y$ will have correlation $\rho$. Note that since it's necessary that $1-r^2\ge 0$, any values of $\rho$ that cause $|r|$ to exceed $1$ will not be achievable in this fashion. The figure plots feasible values of $r$ as a function of the desired correlation $\rho$ and Bernoulli parameter $p$: the contours range in increments of $1/10$ from $-1$ at the upper left through $+1$ at the upper right.
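This construction is easy to verify by Monte Carlo. Below is a sketch of mine (not part of the original answer) using NumPy and the standard library's `NormalDist` for $\Phi^{-1}$ and $\phi$:

```python
import numpy as np
from statistics import NormalDist

def bernoulli_normal_pair(rho, p, n, rng):
    """Draw n pairs (B, Y) with B ~ Bernoulli(p), Y ~ N(0, 1), Cor(B, Y) = rho."""
    nd = NormalDist()
    x0 = nd.inv_cdf(1 - p)                        # the (1-p) quantile of X
    r = rho * np.sqrt(p * (1 - p)) / nd.pdf(x0)   # required Cor(X, Y)
    if abs(r) > 1:
        raise ValueError("this rho is not achievable for this p")
    X = rng.standard_normal(n)
    Z = rng.standard_normal(n)
    Y = r * X + np.sqrt(1 - r ** 2) * Z           # standard normal with Cor(X, Y) = r
    B = (X >= x0).astype(float)                   # Bernoulli(p) indicator
    return B, Y

rng = np.random.default_rng(0)
B, Y = bernoulli_normal_pair(rho=0.4, p=0.3, n=200_000, rng=rng)
print(B.mean(), np.corrcoef(B, Y)[0, 1])  # close to 0.3 and 0.4
```

With $2\times 10^5$ draws the sample mean of $B$ and the sample correlation should land within a couple of thousandths of the targets.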
41,484
Bayesian network vs. association rules
This question is similar to the question: what's the difference between parametric and non-parametric models? A Bayesian network can be viewed as a parametric model, where we have explicit assumptions on the random variables and the dependencies among them (assuming we only do parameter learning, not structure learning). The Apriori algorithm is a type of "data mining" algorithm, which means it returns all the patterns via an efficient algorithm; it is not really "machine learning", i.e., it does not learn/tune parameters to optimize some objective function. Which is better? What are the pros and cons? It is just like the discussion about parametric vs. non-parametric models: if the Bayesian network's assumptions are good, then it will be "better". On the other hand, if the assumptions are not accurate, Apriori may be better. In addition, Bayesian networks and the Apriori algorithm are used differently. Bayesian networks are mainly used for inference. A typical question we can ask a Bayesian network is: "If I know A and B happened, what is the chance that C happens and D does not happen?" The model will return the probability for the query. The Apriori algorithm is used for getting frequent item sets that satisfy a minimum support condition. The typical question asked would be "which items frequently occur together", which is different from the conditional probability query mentioned for Bayesian networks. Informally speaking, we can think of Apriori as asking questions about joint probabilities and storing all high-frequency combinations, while a Bayesian network asks questions about conditional probabilities: given the data, which hypothesis is more likely.
41,485
Likelihood in Linear Regression
The key assumption to derive $f_{Y_i|X_i}$ is that the noise is independent of the input, that is, $\epsilon_i$ is independent of $X_i$. You don't need to know or assume anything about the distribution of $X_i$. You start with: $$f_{Y_i|X_i}(y|x)=p(Y_i=y|X_i=x)=p(\beta_0x+\epsilon_i=y|X_i=x)=p(\epsilon_i=y-\beta_0x|X_i=x)$$ Now the independence assumption is used: since $\epsilon_i$ is independent of $X_i$, its density given a value of $X_i$ is simply its density: $$p(\epsilon_i=y-\beta_0x|X_i=x)=p(\epsilon_i=y-\beta_0x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-(y-\beta_0x)^2/(2\sigma^2)}$$ You could alternatively say that the distribution of the noise conditional on $X_i$ is normal with mean 0 and constant variance for any value of $X_i$. This is what really matters. But this is strictly equivalent to the usual assumptions: $\epsilon_i$ is independent of $X_i$, and $\epsilon_i$ is normally distributed (with mean 0).
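As a numerical sanity check (my own sketch, not part of the original answer): because the log of this conditional density is a negatively scaled sum of squared residuals, maximizing the likelihood over $\beta_0$ gives exactly the least-squares estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)
y = 2.5 * x + rng.normal(0.0, 1.0, n)      # model y_i = beta_0 x_i + eps_i, eps ~ N(0, 1)

def neg_loglik(beta, sigma=1.0):
    # -sum_i log f_eps(y_i - beta * x_i), dropping the constant log(2*pi)/2 terms
    resid = y - beta * x
    return 0.5 * np.sum(resid ** 2) / sigma ** 2 + n * np.log(sigma)

grid = np.linspace(0.0, 5.0, 5001)         # grid step 0.001
beta_mle = grid[np.argmin([neg_loglik(b) for b in grid])]
beta_ols = (x @ y) / (x @ x)               # closed-form least squares (no intercept)
print(abs(beta_mle - beta_ols) < 1e-3)     # True: the two estimates coincide
```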
41,486
Likelihood in Linear Regression
Thanks to the answer of Benoit Sanchez I finally understood (but got hooked up on the wrong path of a replacement rule for conditional densities). The answer is as follows. One needs to assume that:

- The pairs $(x_i, y_i)$ come from random variables $(X_i, Y_i)$ such that the variables $Z_i = (X_i, Y_i)$ are independent
- $Y_i = \beta_0 X_i + \epsilon_i$
- The $\epsilon_i$ are i.i.d. $N(0,\sigma)$ distributed
- $\epsilon_i$ is independent of $X_i$ (the error does not go up or down with the feature but is unrelated to it)
- $X = (X_1, ..., X_n)$ and $Y = (Y_1, ..., Y_n)$ have a common density $f_{X,Y}$. In particular, all the $(X_i, Y_i)$ have common densities $f_{X_i, Y_i}$.

One needs the following simple observation: given $n$ real-valued random variables $Z_1, ..., Z_n$ with a common density $f_{Z_1, ..., Z_n}$ and a bijection $\Phi : \mathbb{R}^n \to \mathbb{R}^n$ such that $\Phi$ and $\Phi^{-1}$ are differentiable, then $$f_{\Phi(Z_1, ..., Z_n)}(z_1, ..., z_n) = |\det(\partial \Phi^{-1})| f_{Z_1, ..., Z_n}(\Phi^{-1}(z_1, ..., z_n))$$ i.e. the density of the transformed random variable is the old density evaluated at a transformed point, times the Jacobian factor. The key observation is that the two-dimensional random variable $(Y_i, X_i)$ is a simple transformation of $(\epsilon_i, X_i)$, namely $$(Y_i, X_i) = \Phi(\epsilon_i, X_i)$$ where $\Phi(e, x) = (e + \beta_0 x, x)$. We have $\Phi^{-1}(y, x) = (y - \beta_0 x, x)$. Its differential matrix is $$\partial \Phi^{-1} = \begin{pmatrix}1 & -\beta_0 \\ 0 & 1 \end{pmatrix}$$ which has determinant one.
Now we apply the observation to this situation and obtain $$f_{Y_i, X_i}(y,x) = f_{\Phi(\epsilon_i, X_i)}(y, x) = 1 \cdot f_{\epsilon_i, X_i}(\Phi^{-1}(y, x)) = f_{\epsilon_i, X_i}(y - \beta_0 x, x)$$ Now $\epsilon_i$ is independent of $X_i$ by assumption, hence $$f_{Y_i, X_i}(y,x) = f_{\epsilon_i}(y - \beta_0 x) f_X(x)$$ or rather $$f_{Y_i| X_i}(y|x) = \frac{f_{\epsilon_i}(y - \beta_0 x) f_X(x)}{f_X(x)} = f_{\epsilon_i}(y - \beta_0 x)$$ and from this (and from $f_{Y, X} = \prod_{i} f_{Y_i, X_i}$, by the independence assumption) one obtains the usual likelihood equations. I am happy now :-)
41,487
Why is softmax regression often written without the bias term?
If you use matrix notation, then $$ \beta_0 + \beta_1 X_1 + \dots +\beta_k X_k $$ can be defined in terms of a design matrix that already contains a column of ones for the intercept $$ \mathbf{X} = \left[ \begin{array}{cccc} 1 & x_{1,1} & \dots & x_{1,k} \\ 1 & x_{2,1} & \dots & x_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n,1} & \dots & x_{n,k} \end{array} \right] $$ so writing $\beta_0 + \dots$ separately is redundant: the column of ones absorbs the bias term.
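For instance (a NumPy sketch with made-up numbers), prepending a column of ones lets ordinary least squares recover the intercept as just another coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
# true model: intercept 1.0 and slopes (2.0, -1.0, 0.5), with tiny noise
y = 1.0 + X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal(50)

# Design matrix with a leading column of ones: beta[0] plays the role of beta_0
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(np.round(beta, 2))  # first entry is the intercept, close to 1.0
```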
41,488
Importance of choice of latent distribution in GAN's
A sufficiently powerful function approximator can map any probability distribution to any other probability distribution. In practice, this means that for any reasonable choice of latent distribution, you can train a generator to map it to the distribution of images in your dataset. So it doesn't make any fundamental difference whether you use a uniform distribution or a Gaussian. However, in variational autoencoders, where the encoder tries to predict the latent representation of the image $q(z|X)$, a normal distribution makes things easier to work with, as your predictions can't ever go "out of bounds" as they could if you used a uniform distribution. Of course this doesn't apply to GANs. The variability of the generated samples which you mentioned in your question isn't really a function of the latent distribution, since the network can scale the spread of the distribution up and down with a single layer.
41,489
Importance of choice of latent distribution in GAN's
I partially disagree with what @shimano said: "The variability of the samples generated which you mentioned in your question isn't really a function of the latent distribution." The sampling space is very important for GAN results. For instance, sampling $z\sim\mathcal{N}(\mu, \sigma)$ with $\sigma=1$ versus $\sigma=10$ would end up quite differently, even when your dataset is not natural images (e.g. MNIST). The intuition behind this is that when your prior ($z$) has a very small standard deviation it tends to create very similar results of relatively high quality, while a sigma that is too large tends to create more varied images of relatively low quality. I am sorry that I don't have any reference for this statement, but I have explored this area of prior sampling a bit, so it isn't only a gut feeling.
41,490
How to find the smallest $\lambda$ such that all Lasso / Elastic Net coefficients are zero?
A lasso solution $\widehat{\beta}(\lambda)$ solves $$\min_\beta \frac{1}{2}||y-X\beta||_2^2 +\lambda||\beta||_1,$$ and it is well known that $\widehat{\beta}(\lambda)=0$ for all $\lambda \geq \lambda_1$ where $\lambda_1 = \max_j |X_j^Ty|$, which should give you the desired value. Note that $\lambda_1$ may need a different scaling if the objective function is scaled differently. Using the cars example with glmnet:

fit<-glmnet(as.matrix(mtcars[,-1]),mtcars[,1], intercept=FALSE, standardize=FALSE)
1/32*max(abs(t(as.matrix(mtcars[,-1]))%*%mtcars[,1]))/(head(fit$lambda))[1]

This gives the value 1, as expected. Note that standardize as well as intercept is set to FALSE. If standardize and intercept are set to TRUE, then the value of $\lambda$ is calculated on the scaled regressors. (In this regard, take a look at https://think-lab.github.io/d/205/#5 for how to perform a proper scaling to get the results you want.):

xy<-scale(mtcars)
fit<-glmnet(as.matrix(mtcars[,-1]),mtcars[,1])
(1/32*max(abs(t(xy[,-1])%*%mtcars[,1]*sqrt(32/31))))/(head(fit$lambda))[1]

This once again gives the value 1. However, I am not sure what glmnet is calculating if intercept = TRUE but standardize = FALSE. We saw that glmnet with its standard options calculates $\lambda_{1}$ as $$\lambda_{1} = \max_j\left| \frac{1}{n} \sum_{i=1}^n x_{ij}^*y_i\right|,$$ where $x_{ij}^* = \frac{x_{ij}-\overline{x_j}}{\sqrt{\frac{1}{n}\sum_{i=1}^n (x_{ij}-\overline{x_j})^2}}.$ It turns out that for an elastic net problem (corresponding to $\alpha \in (0,1]$ in glmnet) the maximum value $\lambda_{1,\alpha}$ is calculated as $$\lambda_{1,\alpha}= \lambda_{1}/\alpha.$$ Indeed, setting for example $\alpha=0.3$ we have:

aa<-0.3
xy<-scale(mtcars)
fit<-glmnet(as.matrix(mtcars[,-1]),mtcars[,1],a=aa)
1/aa*(1/32*max(abs(t(xy[,-1])%*%mtcars[,1]*sqrt(32/31))))/(head(fit$lambda))[1]

which results once again in an output value of $1$. That's for the calculations. Note however that the elastic net criterion can be rewritten as a standard lasso problem.
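The same check can also be run outside of R. Here is a sketch of mine (not from the original answer) with scikit-learn, whose `Lasso` minimizes $\frac{1}{2n}\lVert y-Xw\rVert_2^2+\alpha\lVert w\rVert_1$; with that scaling, and no intercept or standardization, the smallest all-zero penalty is $\alpha_{\max}=\max_j |X_j^Ty|/n$:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, 0.0, -2.0, 0.0, 0.5]) + rng.standard_normal(100)

n = len(y)
alpha_max = np.max(np.abs(X.T @ y)) / n   # the 1/n matches sklearn's 1/(2n) loss scaling

# Just above alpha_max every coefficient is zero; just below, some survive.
hi = Lasso(alpha=1.001 * alpha_max, fit_intercept=False).fit(X, y)
lo = Lasso(alpha=0.9 * alpha_max, fit_intercept=False).fit(X, y)
print(np.all(hi.coef_ == 0), np.any(lo.coef_ != 0))  # True True
```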
How to find the smallest $\lambda$ such that all Lasso / Elastic Net coefficients are zero?
A lasso solution $\widehat{\beta}(\lambda)$ solves $$\min_\beta \frac{1}{2}||y-X\beta||_2^2 +\lambda||\beta||_1.$$ and it is well known that we have $\widehat{\beta}(\lambda)=0$ for all $\lambda \g
How to find the smallest $\lambda$ such that all Lasso / Elastic Net coefficients are zero? A lasso solution $\widehat{\beta}(\lambda)$ solves $$\min_\beta \frac{1}{2}||y-X\beta||_2^2 +\lambda||\beta||_1$$ and it is well known that we have $\widehat{\beta}(\lambda)=0$ for all $\lambda \geq \lambda_1$, where $\lambda_1 = \max_j |X_j^Ty|$, which should give you the desired value. Note that $\lambda_1$ may need a different scaling if the objective function is scaled differently. Using the cars example with glmnet: fit<-glmnet(as.matrix(mtcars[,-1]),mtcars[,1], intercept=FALSE, standardize=FALSE) 1/32*max(abs(t(as.matrix(mtcars[,-1]))%*%mtcars[,1]))/(head(fit$lambda))[1] This gives the value 1, as expected. Note that standardize as well as intercept is set to FALSE. If standardize and intercept are set to TRUE, then the value of $\lambda$ is calculated on the scaled regressors. (In this regard, take a look at https://think-lab.github.io/d/205/#5 for how to perform a proper scaling to get the results you want.): xy<-scale(mtcars) fit<-glmnet(as.matrix(mtcars[,-1]),mtcars[,1]) (1/32*max(abs(t(xy[,-1])%*%mtcars[,1]*sqrt(32/31))))/(head(fit$lambda))[1] This once again gives the value 1... However, I am not sure what glmnet is calculating if intercept = TRUE but standardize = FALSE. We saw that glmnet with its standard options calculates $\lambda_{1}$ as $$\lambda_{1} = \max_j\left|\frac{1}{n} \sum_{i=1}^n x_{ij}^*y_i\right|,$$ where $x_{ij}^* = \frac{x_{ij}-\overline{x_j}}{\sqrt{\frac{1}{n}\sum_{i=1}^n (x_{ij}-\overline{x_j})^2}}.$ It turns out that for an elastic net problem (corresponding to $\alpha \in (0,1]$ in glmnet) the maximum value $\lambda_{1,\alpha}$ is calculated as $$\lambda_{1,\alpha}= \lambda_{1}/\alpha.$$ Indeed, setting for example $\alpha=0.3$ we have: aa<-0.3 xy<-scale(mtcars) fit<-glmnet(as.matrix(mtcars[,-1]),mtcars[,1],a=aa) 1/aa*(1/32*max(abs(t(xy[,-1])%*%mtcars[,1]*sqrt(32/31))))/(head(fit$lambda))[1] which results once again in an output value of $1$. 
That's it for the calculations. Note, however, that the elastic net criterion can be rewritten as a standard lasso problem.
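The optimality condition behind $\lambda_1 = \max_j |X_j^Ty|$ can also be checked numerically with a tiny coordinate-descent lasso. The sketch below is in Python rather than R, with made-up data and an illustrative (not production-grade) solver, so it is a demonstration of the principle rather than of glmnet itself:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual excluding j
            b[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = rng.standard_normal(50)

lam_max = np.max(np.abs(X.T @ y))    # smallest lambda giving an all-zero solution
b_at = lasso_cd(X, y, lam_max)       # stays exactly zero
b_below = lasso_cd(X, y, 0.99 * lam_max)   # at least one coefficient activates
print(np.allclose(b_at, 0.0), np.any(b_below != 0.0))
```

At $\lambda = \lambda_1$ the subgradient condition $|X_j^T y| \le \lambda$ holds for every $j$, so $\beta = 0$ is optimal; just below it, the coordinate attaining the maximum becomes active.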
How to find the smallest $\lambda$ such that all Lasso / Elastic Net coefficients are zero? A lasso solution $\widehat{\beta}(\lambda)$ solves $$\min_\beta \frac{1}{2}||y-X\beta||_2^2 +\lambda||\beta||_1.$$ and it is well known that we have $\widehat{\beta}(\lambda)=0$ for all $\lambda \g
41,491
How to find the smallest $\lambda$ such that all Lasso / Elastic Net coefficients are zero?
First, I think glmnet will start with a large $\lambda$ instead of a small $\lambda$. Here is the documentation: note, if we want to specify $\lambda$, it is better in decreasing order. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Do not supply a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warm starts for speed, and it's often faster to fit a whole path than compute a single fit. Also, see my question here: Why does `R` `glmnet` need to run with $\lambda$ in decreasing order? The fitting results contain the lambda values used. Here is an example. library(glmnet) fit=glmnet(as.matrix(mtcars[,-1]),mtcars[,1]) head(fit$lambda) [1] 5.146981 4.689737 4.273114 3.893502 3.547614 3.232454
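Per its documentation, glmnet's default $\lambda$ sequence is a geometric grid running down from $\lambda_{\max}$ to lambda.min.ratio times $\lambda_{\max}$. A sketch of that construction (in Python with made-up data; this mirrors the idea, not glmnet's internal code):

```python
import numpy as np

def lambda_sequence(X, y, nlambda=100, min_ratio=0.0001):
    """Decreasing geometric grid from lambda_max down to min_ratio*lambda_max,
    mimicking the default path used by coordinate-descent lasso solvers."""
    n = X.shape[0]
    lam_max = np.max(np.abs(X.T @ y)) / n           # glmnet-style 1/n scaling
    return lam_max * min_ratio ** (np.arange(nlambda) / (nlambda - 1))

rng = np.random.default_rng(1)
X = rng.standard_normal((32, 10))
y = rng.standard_normal(32)
lams = lambda_sequence(X, y)
print(lams[0], lams[-1], np.all(np.diff(lams) < 0))
```

Fitting along such a decreasing grid lets each solution warm-start the next, which is why a whole path is often cheaper than a single fit.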
41,492
Poisson GLMM vs GLM in R (lme4)
There is probably something wrong with the question. The deviance for fit1 can be computed with deviance(fit1) # same as sum(resid(fit1)^2) But for the GLMM, the lme4 package uses another method, documented in ?llikAIC, which gives 1128 - higher than for the GLM. Perhaps the question actually wants to test for the decrease in deviance when adding the single fixed effect, because it comes out as 118, close to your 116.6: fit1 <- glm(count~race, family=poisson(link = log),data=homi) fit0 <- glm(count~1, family=poisson(link = log),data=homi) > anova(fit0, fit1) Analysis of Deviance Table Model 1: count ~ 1 Model 2: count ~ race Resid. Df Resid. Dev Df Deviance 1 1307 962.80 2 1306 844.71 1 118.09
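The anova deviance drop can be reproduced by hand: for a Poisson GLM the deviance is $2\sum_i\left[y_i\log(y_i/\hat\mu_i) - (y_i-\hat\mu_i)\right]$, and with a single binary factor the fitted means are just the group means. A Python sketch on simulated data (the group labels and rates are invented, not the homicide data from the exercise):

```python
import numpy as np

def poisson_deviance(y, mu):
    """2 * sum(y*log(y/mu) - (y - mu)), with the y=0 terms contributing 2*mu."""
    term = y * np.log(np.where(y > 0, y, 1) / mu)   # 0*log(.) handled as 0
    return 2.0 * np.sum(term - (y - mu))

rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=500)                # a binary "race"-like factor
y = rng.poisson(np.where(group == 1, 0.5, 0.1))     # different rates per group

mu0 = np.full_like(y, y.mean(), dtype=float)        # intercept-only fit
mu1 = np.where(group == 1, y[group == 1].mean(), y[group == 0].mean())  # group means

drop = poisson_deviance(y, mu0) - poisson_deviance(y, mu1)
print(drop)   # analogous to the "Deviance" column of anova(fit0, fit1)
```

Adding the factor can only reduce the residual deviance, and the reduction is the likelihood-ratio statistic the exercise seems to be after.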
41,493
Poisson GLMM vs GLM in R (lme4)
This is a tricky exercise. Let's break it down in two parts. Specify a Poisson GLMM for the 1990 General Social Survey data. 1308 subjects responded to the question: Within the past 12 months, how many people have you known personally that were victims of homicide? The responses are broken down by race ("white" and "black"). Fit the Poisson GLMM. This is survey data; it seems natural to let the effects vary across participants. So let's make the GLMM a random-intercepts model. In mathematical notation we will compare: A Poisson GLM with fixed race effects. The intercept for "white" subjects is $\beta_0$ and the intercept for "black" subjects is $\beta_0 + \beta_1$. $$ \begin{aligned} \operatorname{count}_{i} &\sim \operatorname{Poisson}(\lambda_i) \\ \log(\lambda_i) &= \beta_{0} + \beta_{1}(\operatorname{black}) \end{aligned} $$ A Poisson GLMM with random race effects. The intercepts for "white" subjects are $\operatorname{Normal}(\gamma_0, \sigma^2)$ and the intercepts for "black" subjects are $\operatorname{Normal}(\gamma_0 + \gamma_1, \sigma^2)$. $$ \begin{aligned} \operatorname{count}_{i} &\sim \operatorname{Poisson}(\lambda_i) \\ \log(\lambda_i) &=\alpha_{j[i]} \\ \alpha_{j} &\sim N \left(\gamma_{0}+ \gamma_{1}(\operatorname{black}), \sigma^2\right) \end{aligned} $$ We have only one observation from each participant, so we don't expect to get reliable estimates of the random intercepts. This is not a very sophisticated mixed-effects model. In fact, a Negative Binomial GLM fits the data better. Let's fit the two models: the GLM with stats::glm and the GLMM with lme4::glmer. fit.glm <- glm( count ~ race, family = poisson, data = homicides ) fit.glmer <- glmer( count ~ race + (1 | Obs), family = poisson, data = homicides, nAGQ = 20 ) And now the tricky part. Let's calculate the deviance two ways: a) minus two times the log likelihood and b) the sum of the deviance residuals squared. 
# Compute deviance as -2 times the log likelihood. -2 * logLik(fit.glm) #> 1117.99 (df=2) -2 * logLik(fit.glmer) #> 728.0926 (df=3) # Compute deviance as the sum of the deviance residuals squared. sum(resid(fit.glm)^2) #> 844.7073 sum(resid(fit.glmer)^2) #> 214.0758 Notice that 844.7073 - 728.0926 = 116.6147. This gives the "right answer" though the computation is not meaningful, as the answer by @RemkoDuursma points out as well. To choose between the models we can use the deviance = -2 × log-likelihood or the AIC = -2 × log-likelihood + 2 × #parameters. See also Residual deviance, residuals, and log-likelihood in [weighted] logistic regression PS. I came across this error when I used a different R package to fit the Poisson GLMM, glmmML. fit.glmmML <- glmmML( count ~ race, cluster = Obs, family = poisson, method = "ghq", data = homicides ) deviance(fit.glm) #> 844.7073 deviance(fit.glmer) #> 214.0758 deviance(fit.glmmML) #> 728.107 lme4::glmer and glmmML::glmmML report very different "deviance" for the same Poisson GLMM even though both use maximum likelihood and their parameter estimates are almost the same. It took me a while to realize they have a different definition of deviance.
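The two "deviance" definitions can be reconciled directly: the sum of squared deviance residuals equals $-2(\ell_{\text{model}}-\ell_{\text{saturated}})$, while the lme4-style figure is just $-2\ell_{\text{model}}$; they differ by the saturated-model constant. A numerical check for an intercept-only Poisson fit (Python sketch with synthetic counts, not the survey data):

```python
import math
import numpy as np

def poisson_loglik(y, mu):
    """Exact Poisson log-likelihood, including the log(y!) constant."""
    return float(np.sum(y * np.log(mu) - mu - np.array([math.lgamma(v + 1) for v in y])))

rng = np.random.default_rng(3)
y = rng.poisson(3.0, size=200) + 1          # +1 keeps every count positive
mu = np.full(y.shape, y.mean())             # intercept-only fitted means

ll_model = poisson_loglik(y, mu)
ll_sat = poisson_loglik(y, y.astype(float)) # saturated model: mu_i = y_i

dev_residual_sq = 2 * np.sum(y * np.log(y / mu) - (y - mu))  # sum(resid^2)
minus2_loglik = -2 * ll_model                                # lme4-style "deviance"

# The two quantities differ exactly by the saturated-model term:
assert math.isclose(dev_residual_sq, -2 * (ll_model - ll_sat), rel_tol=1e-9)
print(dev_residual_sq, minus2_loglik)
```

Because $\ell_{\text{saturated}} < 0$, the $-2\ell_{\text{model}}$ figure is always larger, which matches the pattern in the R output above.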
41,494
How is the decision boundary's equation determined?
You need one additional piece of information to determine a decision boundary: a level to threshold the probabilities. Given a threshold $T$, we make positive decisions when $$ g(\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_1^2 + \theta_4 x_2^2) \geq T $$ and negative decisions when $$ g(\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_1^2 + \theta_4 x_2^2) < T $$ so the boundary is given by $$ g(\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_1^2 + \theta_4 x_2^2) = T $$ In your case, logistic regression, $g$ is the sigmoid function, whose inverse is the log odds, so the decision boundary is $$ \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_1^2 + \theta_4 x_2^2 = \log \left(\frac{T}{1-T}\right) $$ The right hand side is just a constant. You can complete the square to figure out what type of geometric curve this determines in any given case. Andrew got $0$ on the right hand side by setting $T = 0.5$, something I generally would not advise without studying the specific problem you are trying to solve. Thresholds are best set by examining the cost tradeoffs between false negatives and false positives for various values of $T$. However it's still not clear to me: did Andrew say "Cool, my data can be separated by a circle, let's go with the circle equation [...]"? Did the algorithm figure it out instead? In this case, certainly the first thing! Logistic regression has no built in ability to create and use transformations of raw features, and it's common to use exploratory data analysis to assist when building models. Other approaches are: Use a basis expansion of features in the regression, like cubic splines. This will allow the regression to fit very general shapes. Use a generalized version of logistic regression, like gradient boosted logistic regression. This has the ability to adaptively create new features to fit your data. But for a first shot at logistic regression, it's good practice to look at data, and engineer appropriate features. 
This is almost certainly the lesson Andrew is trying to communicate.
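The threshold algebra is easy to verify numerically: with the sigmoid $g$, the boundary $g(\eta)=T$ is the same as $\eta=\log(T/(1-T))$. A quick check with invented circle-like coefficients (Python sketch; these $\theta$ values are illustrative, not Andrew Ng's):

```python
import numpy as np

def sigmoid(eta):
    return 1.0 / (1.0 + np.exp(-eta))

def eta(x1, x2, theta):
    """Linear predictor with the quadratic feature expansion from the lecture."""
    t0, t1, t2, t3, t4 = theta
    return t0 + t1 * x1 + t2 * x2 + t3 * x1**2 + t4 * x2**2

theta = (-1.0, 0.0, 0.0, 1.0, 1.0)   # invented: boundary is the circle x1^2 + x2^2 = 1
T = 0.5
cut = np.log(T / (1 - T))            # logit of the threshold; 0 when T = 0.5

# A point exactly on the circle maps to probability exactly T:
on_boundary = eta(1.0, 0.0, theta)   # x1^2 + x2^2 = 1  ->  eta = 0 = cut
print(sigmoid(on_boundary))          # -> 0.5

# Inside the circle is classified negative, outside positive:
print(sigmoid(eta(0.1, 0.1, theta)) < T, sigmoid(eta(2.0, 0.0, theta)) >= T)
```

Changing $T$ just shifts the constant on the right-hand side, so the boundary stays a circle but its radius changes.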
41,495
What is the chance that someone is dealt a suit in the game of Bridge?
The answer is a tiny number, but large enough to suggest someone has at some time been dealt a suit in the US. This post shows how to find that chance (with a sequence of three simple calculations), provides an interpretation, and concludes by showing how to compute it accurately. Let's begin with a small generalization, because it uncovers the essence of the problem. Let a "suit" consist of $k\ge 1$ cards. A "deck" is the union of $m\ge 1$ distinct suits. In the question, $k=13$ and $m=4$. To model a deal, suppose the deck is randomly shuffled and partitioned into $m$ groups of $k$ contiguous cards in the shuffle. Each of these groups is a "hand". First let's find the chance that one pre-specified player is dealt a suit. There are $$\binom{mk}{k} = \frac{(mk)(mk-1)\cdots((m-1)k+1)}{(k)(k-1)\cdots(1)}$$ possible hands, all of them equally likely, and $m$ of them are suits. This chance therefore is $$p^{(1)}(m,k) = \frac{m}{\binom{mk}{k}}.\tag{1}$$ If this is what the question was asking, we are done. However, the more likely interpretation is that it asks for the chance that one or more of the hands is a suit. To do this, proceed to find the chance that two pre-designated players are dealt suits. Conditional on the first player being dealt a suit (the chance given by $(1)$), there remain $m-1$ suits. Result $(1)$ applies with $m$ replaced by $m-1$ to give the conditional probability. 
These two values multiply to give the joint probability $$p^{(2)}(m,k) = p^{(1)}(m,k)p^{(1)}(m-1,k).$$ Continuing this reasoning inductively gives the chance that $s\ge 1$ pre-designated players each are dealt suits, $$p^{(s)}(m,k) = \prod_{i=0}^{s-1} p^{(1)}(m-i,k).\tag{2}$$ The Principle of Inclusion-Exclusion ("PIE") supplies the chance that one or more players (not designated in advance) are dealt suits; it is $$p(m,k) = \sum_{s=1}^m (-1)^{s-1} \binom{m}{s} p^{(s)}(m,k).\tag{3}$$ In particular, $$\eqalign{ p(4,13) &= \frac{(3181)(233437)(25281233)}{(2^6)(3)(5^4)(7^4)(17^3)(19^2)(23^2)(29)(31)(37)(41)(43)(47)} \\ &=\frac{18772910672458601}{745065802298455456100520000}\\ &\approx 2.519631234522642\times 10^{-11}. }$$ The small primes in the denominator were expected: they cannot exceed $mk=52$. The large primes in the numerator strongly suggest there exists no general closed form formula for $p(m,k)$. What does this answer mean? Wikipedia traces the modern version of Bridge to 1904, states that it used to be more popular in the US, and reports there are around 25 million players today in the US. Although it's difficult to know exactly what it means to be a Bridge player, we might expect each one on average to play between a few hands and a few hundred hands annually, with each hand involving four players. (In Duplicate Bridge some deals are played multiple times, but let's ignore that complication and just absorb it into the "few to a few hundred" estimate.) The expected number of Bridge deals annually in the US in which a suit is dealt therefore is on the order of ten to a hundred times the product $$p(4,13) \times 25 \times 10^6 \approx \frac{1}{1588}.$$ Accounting for the $110$ or so years that have transpired since 1904, we might multiply this expectation by another two orders of magnitude. The result is somewhere between $1/10$ and $10$. 
Although $p(4,13)$ might seem "impossibly small," it is not negligible: depending on the assumptions about how active Bridge players have been, it's somewhere between plausible and highly likely a suit has already been dealt in the US. Many people have reported such hands. The obvious explanation is that some (many?) decks are not randomly shuffled or dealt. See Peter Rowlett on Four Perfect Hands or Science News on Thirteen Spades. Computing notes Computing the answer is as straightforward and simple as it looks in formulas $(1)$, $(2)$, and $(3)$: see the R example below. When applying PIE it's usually best to avoid large values of $m$ due to the alternating addition and subtraction in the final formula: round-off error can accumulate rapidly when some individual terms in the sum are much greater in size than the result. This situation is nicer. Since in general the first term--based on the chance of just one particular player getting a suit--dominates the rest, this code performs the sum in reverse order to avoid that roundoff error. # NB: `choose` computes the binomial coefficient p <- function(m, k) { p.1 <- function(m, k) m / choose(m*k, k) # Formula (1) p.s <- function(s, m, k) prod(p.1(m:(m-s+1), k)) # Formula (2) p.s <- Vectorize(p.s, "s") sum((-1)^(m:1-1) * choose(m, m:1) * p.s(m:1, m, k)) # Formula (3) } print(p(4, 13), digits=16) [1] 2.519631234522642e-11 The result is correct to the full precision inherent in IEEE floating point arithmetic. References Gridgeman, N. T. "The Mystery of the Missing Deal." Amer. Stat. 18, 15-16, Feb. 1964. Mosteller, F. "Perfect Bridge Hand." Problem 8 in Fifty Challenging Problems in Probability with Solutions. New York: Dover, pp. 2 and 22-24, 1987. Wolfram Mathworld quotes the same rational value for $p(4,13)$. Its references are Mosteller and Gridgeman.
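The R computation above can be cross-checked exactly using rational arithmetic. The following Python translation (not part of the original answer) implements formulas $(1)$–$(3)$ with Fraction, so there is no floating-point round-off at all:

```python
from fractions import Fraction
from math import comb

def p_suit(m, k):
    """P(at least one of m hands of k cards is a complete suit), via
    inclusion-exclusion, computed exactly with rationals."""
    def p1(mm):                       # formula (1): one designated player
        return Fraction(mm, comb(mm * k, k))
    def ps(s):                        # formula (2): s designated players
        out = Fraction(1)
        for i in range(s):
            out *= p1(m - i)
        return out
    return sum((-1) ** (s - 1) * comb(m, s) * ps(s) for s in range(1, m + 1))

p = p_suit(4, 13)
print(p)            # exact rational value
print(float(p))     # ~2.519631234522642e-11
```

Because every intermediate quantity is an exact fraction, this reproduces the rational value quoted above without any of the round-off concerns discussed in the computing notes.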
41,496
What is the chance that someone is dealt a suit in the game of Bridge?
Using conditional probabilities, with $P_i$, $i=1, 2, \ldots, 13$, denoting the probability of getting the "right" card at each draw, we can calculate the probability sequentially. We can assume WLOG that the player is dealt all 13 cards in sequence, irrespective of what may be dealt to the other players. We do not care what the suit of the first card is, so $P_1 = 1$, but the next card dealt must match that suit; of the 51 remaining cards only 12 do, so $P_2 = 12/51$. Continuing in this fashion, the 13th card has probability $P_{13} = 1/40$ of being a matching card. Mathematically, the product reduces to $12!\,39!/51! = 4 \cdot 13!\,39!/52! \approx 6.3 \times 10^{-12}$, which is vanishingly small.
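The sequential product can be verified exactly and compared with the counting answer $4/\binom{52}{13}$ (a Python check, not in the original answer):

```python
from fractions import Fraction
from math import comb

# After the first (free) card, each later card must match its suit:
# 12/51 * 11/50 * ... * 1/40.
p = Fraction(1)
for i in range(1, 13):
    p *= Fraction(13 - i, 52 - i)

print(p)                                   # equals 12! * 39! / 51!
print(p == Fraction(4, comb(52, 13)))      # agrees with 4 / C(52,13): True
print(float(p))                            # ~6.3e-12
```

The agreement confirms that conditioning card by card and counting whole hands are two routes to the same probability.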
41,497
What is the chance that someone is dealt a suit in the game of Bridge?
One player: $\frac {\binom{4}{1}} {\binom{52}{13}} \approx 6.29907808979643\times 10^{-12}$. The probability that a given player does not have a complete suit is one minus that value, so (treating the hands as independent) the probability that no player has one is that quantity raised to the 4th power. The chance of one or more suited hands is therefore one minus the chance of none: $1 - (1 - 6.29907808979643\times 10^{-12})^4 \approx 2.51963\times 10^{-11}$. This is a little odd because the hands are not really independent: if 3 are suited then the 4th has to be suited.
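One numerical caveat with this route: with $p \approx 6.3\times 10^{-12}$, computing $1-(1-p)^4$ directly in double precision loses most of $p$'s digits to cancellation in the subtraction $1-p$, so only about six significant digits survive. The log1p/expm1 idiom avoids the cancellation (a Python illustration, not part of the original answer):

```python
import math

p = 4 / math.comb(52, 13)          # chance one designated hand is a suit

naive = 1 - (1 - p) ** 4           # loses digits to cancellation near 1
stable = -math.expm1(4 * math.log1p(-p))   # same quantity, full precision

print(naive)    # ~2.51963e-11, only ~6 accurate digits
print(stable)   # ~2.51963124e-11
```

Both are close to the exact inclusion-exclusion answer; the stable version differs from it only by the tiny dependence correction noted in the text.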
41,498
What would the distribution of time spent per day on a given site look like?
For YouTube I think the distribution depends on the distribution of video length. Most videos are about 8 to 10 minutes (roughly the average attention span). One might watch 0, 1, 2, or any other number of videos. A Poisson distribution might be a good candidate to model the number of visits to a website or the number of videos watched per day. If the distribution of the length of a single video is $P_V(t)$, then the distribution of total time would be something like $$P_{Total}(t)=\sum_{i} P_p(N=i)P_{Vi}(t)$$ where $i$ is the number of visits, $P_p(N=i)$ is the Poisson probability and $P_{Vi}$ is the distribution of the total time of $i$ videos, which can be obtained by convolving $P_V$ with itself $i$ times. If $P_V(t)$ is approximately Gaussian, then the total would be a mixture of Gaussians: a sequence of bumps near multiples of the mean video length (only a guess, not to scale). For Facebook there might be something similar to a video length, for example the time required to look at one page.
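The shape of $P_{Total}$ can be checked by simulation; by Wald's identity its mean is $E[N]\cdot E[V]$. A Python sketch with invented parameters (a Poisson rate of 3 videos per day and video lengths roughly Normal(9, 2) minutes, truncated at zero):

```python
import numpy as np

rng = np.random.default_rng(42)
lam, mu_v, sd_v = 3.0, 9.0, 2.0        # invented: videos/day, minutes per video

n_days = 20_000
n_videos = rng.poisson(lam, size=n_days)
# Total watch time per day: the sum of N iid (truncated-at-0) video lengths.
total = np.array([
    np.clip(rng.normal(mu_v, sd_v, size=n), 0, None).sum() for n in n_videos
])

print(total.mean())        # close to lam * mu_v = 27 (Wald's identity)
print((total == 0).mean()) # spike at zero: about exp(-lam) ~ 0.05 of days
```

A histogram of `total` shows the predicted structure: a point mass at zero for no-video days, then overlapping Gaussian bumps near 9, 18, 27, ... minutes.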
41,499
What would the distribution of time spent per day on a given site look like?
Having the vocabulary to describe a distribution is an important skill for a data scientist when it comes to communicating ideas to your peers. There are 4 important concepts, with supporting vocabulary, that you can use to structure your answer to a question like this:

Center (mean, median, mode)
Spread (standard deviation, interquartile range, range)
Shape (skewness, kurtosis, unimodal or bimodal)
Outliers (do they exist?)

In terms of the distribution of time spent per day on Facebook (FB), one can imagine there may be two groups of people on Facebook:

People who scroll quickly through their feed and don't spend much time on FB.
People who spend a large amount of their social media time on FB.

From this point of view, we can make the following claims about the distribution of time spent on FB, with the caveat that this needs to be validated with real-world data.

Center: Since we expect the distribution to be bimodal (see Shape), we should describe it using the mode and median rather than the mean. These summary statistics are better suited to distributions that deviate from the classical normal distribution.

Spread: Since we expect the distribution to be bimodal (see Shape), the spread and range will be fairly large, so a large interquartile range will be needed to describe this distribution accurately. Further, refrain from using the standard deviation to describe the spread of this distribution.

Shape: From our description, the distribution would be bimodal: one large group of people clustered around the lower end of the distribution and another centered around the higher end. There could also be some right skewness from people who spend a great deal of time on FB.
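The outlier check above can be sketched under the two-group assumption with the IQR method; all group sizes, means, and spreads below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bimodal sample: quick scrollers around 5 min/day,
# heavy users around 60 min/day (invented numbers).
quick = rng.normal(5, 2, 7_000).clip(min=0)
heavy = rng.normal(60, 15, 3_000).clip(min=0)
minutes = np.concatenate([quick, heavy])

# Robust summaries: median and IQR rather than mean and standard deviation.
q1, median, q3 = np.percentile(minutes, [25, 50, 75])
iqr = q3 - q1

# IQR rule: flag values beyond 1.5 * IQR above the upper quartile.
upper_fence = q3 + 1.5 * iqr
outliers = minutes[minutes > upper_fence]
print(median, iqr, len(outliers))
```

Note that Grubbs' test assumes approximately normal data, so for a bimodal distribution the quantile-based IQR rule is the safer choice among the methods listed.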
41,500
What would the distribution of time spent per day on a given site look like?
The answer is wrong in the sense that you do not explain how you arrived at it. The reason interviewers ask a question like this is to see how you think. One way to answer such a question would be to say that you don't know, but that you can make an educated guess. Let's assume that if a person is visiting a site, there is a probability $p$ that she will leave the website after each unit of time $t$ that has passed. With probability $p$ her visit is limited to $1$ unit of time. With probability $(1-p)p$ (i.e. the probability she hasn't left yet times the probability she then leaves) her visit is limited to $2$ units of time. With probability $(1-p)^2p$ her visit is limited to $3$ units of time. Etc. The probability mass function of this distribution is therefore $(1-p)^{t-1} p$. This is the geometric distribution. Note: I'm not saying that this is correct, but it is a correct answer for the interview. You might also complicate it a bit by then saying that perhaps $p$ is a function of $t$, etc.
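That guess is easy to sanity-check by simulation; the value of `p` here is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.2  # assumed probability of leaving after each unit of time

# numpy's geometric distribution counts trials up to and including the first
# "success" (leaving), i.e. visit lengths t = 1, 2, 3, ...
visits = rng.geometric(p, size=100_000)

# Compare empirical frequencies with the enumeration above:
# P(T=1) = p, P(T=2) = (1-p)p, P(T=3) = (1-p)^2 p, ...
for t in range(1, 5):
    print(t, (visits == t).mean(), (1 - p) ** (t - 1) * p)

print(visits.mean())  # ≈ 1 / p = 5 units of time
```

The mean visit length of $1/p$ also gives a quick way to back out a plausible $p$ from an observed average session length.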