46,001
How to mathematically denote common elements
If it's reasonable to treat them as sets (e.g. each distinct element can only ever appear once, order doesn't matter, etc.), then you can use set intersection to denote the elements in common:

$A_1 = \{8,1,5,6\}$
$A_2 = \{6,8\}$
... etc.

Then $A_1\cap A_2\cap A_3\cap A_4\cap A_5 = \{6,8\}$. If you have a large number of sets to take the intersection of, you could write $A_1\cap A_2\cap \ldots \cap A_n$ as $$\bigcap_{i=1}^n A_i$$
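A quick illustrative sketch (not part of the original answer): the same intersection can be computed directly by treating the collections as Python sets. The sets $A_3$ through $A_5$ here are hypothetical examples extending the answer's $A_1$ and $A_2$.

```python
# Intersection of several sets: a programmatic version of the big-cap notation.
# A3..A5 are made-up example sets; A1 and A2 are from the answer.
from functools import reduce

A = [
    {8, 1, 5, 6},   # A1
    {6, 8},         # A2
    {6, 8, 3},      # A3 (hypothetical)
    {2, 6, 8},      # A4 (hypothetical)
    {8, 6, 9},      # A5 (hypothetical)
]

# reduce applies set.intersection pairwise across the list,
# yielding the elements common to all sets.
common = reduce(set.intersection, A)
print(sorted(common))  # -> [6, 8]
```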
46,002
What is the loss function used for CNN?
In most cases CNNs use a cross-entropy loss on the one-hot encoded output. For a single image the cross-entropy loss looks like this: $$ - \sum_{c=1}^M{(y_c \cdot \log{\hat y_c})} $$ where $M$ is the number of classes (e.g. $1000$ for ImageNet) and $\hat y_c$ is the model's prediction for that class (i.e. the output of the softmax for class $c$). Because the labels are one-hot encoded, $y$ is a $(1000 \times 1)$ vector containing a single one and zeroes elsewhere, so $y_c$ is either $1$ or $0$. Thus, out of the whole sum only one term is actually added: the one with $y_c=1$.
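A minimal sketch of the per-image cross-entropy described above: with a one-hot label $y$, the sum collapses to $-\log \hat y_c$ for the true class $c$. The vectors below are hypothetical examples, not from the answer.

```python
# Cross-entropy for a single image with a one-hot label.
import math

def cross_entropy(y, y_hat):
    """-sum_c y_c * log(y_hat_c), for one-hot y and softmax outputs y_hat."""
    return -sum(yc * math.log(pc) for yc, pc in zip(y, y_hat) if yc > 0)

y = [0, 0, 1, 0]              # one-hot label: true class is index 2
y_hat = [0.1, 0.2, 0.6, 0.1]  # hypothetical softmax output

loss = cross_entropy(y, y_hat)
print(round(loss, 4))  # -> 0.5108, i.e. -log(0.6): only the y_c = 1 term survives
```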
46,003
What is the loss function used for CNN?
As Jan says in a comment, AlexNet uses cross entropy as the loss function. It's important to note, though, that a Convolutional Neural Network describes the architecture of the network, not the goal of the network. It is the goal of a network that determines the loss function. CNN architectures can be used for many tasks with different loss functions:

multi-class classification (as in AlexNet): typically cross-entropy loss
regression: typically squared-error loss
image segmentation: can use cross-entropy loss as well, but can also use several other kinds of loss functions
reinforcement learning: in Deep Q-Networks, the "expected discounted accumulated future reward" can be used
generative adversarial networks (generating images): the Jensen–Shannon divergence was used in the original implementation
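A small sketch (an addition, not from the answer) of the first two entries in the list: the same network output could be trained against either loss, depending on the goal. The example values are hypothetical.

```python
# The task, not the CNN architecture, picks the loss. Two common choices:
import math

def squared_error(y, y_hat):
    """Regression: sum of squared differences."""
    return sum((a - b) ** 2 for a, b in zip(y, y_hat))

def cross_entropy(y, y_hat):
    """Multi-class classification with one-hot label y."""
    return -sum(a * math.log(b) for a, b in zip(y, y_hat) if a > 0)

print(squared_error([1.0, 2.0], [0.5, 2.5]))        # -> 0.5
print(round(cross_entropy([0, 1], [0.3, 0.7]), 4))  # -> 0.3567, i.e. -log(0.7)
```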
46,004
Behaviour of Welch's t-test with unequal group sizes
This question is sufficiently broad that a comprehensive answer would require a simulation study (some key parts of which have undoubtedly been done), going far beyond our usual style of answers. A Welch 2-sample t test can't be exactly as good as a pooled t test if we know the two populations have the same variance.

(1) A Welch test at "level 5%" with $n_1 = 5, n_2 = 100$ when variances are equal gave Type I error a little above 5% $(0.054 \pm 0.0014).$ I didn't explore sample sizes below 5.

    set.seed(1234)
    out = replicate(10^5, as.numeric(t.test(rnorm(5), rnorm(100))[2:3]))
    mean(out[1,])
    [1] 4.902672       # avg DF about 5
    mean(out[2,] < .05)
    [1] 0.05443        # rejection rate a bit above 5%

In the histogram below of 100,000 simulated P-values, the maroon bar at left represents the true significance level (Type I error) of the test. Its width is 0.5 and its height is about 0.108, for a total area of about 0.054.

(2) Pooled test at level 5% with $n_1 = 5, n_2 = 100$ when variances are equal. Here, DF $= n_1 + n_2 - 2 = 103$ for all tests. The significance level is very near the intended 5%.

    set.seed(1234)
    out = replicate(10^5, as.numeric(t.test(rnorm(5), rnorm(100), var.eq=T)[2:3]))
    mean(out[1,])
    [1] 103            # DF exactly n1 + n2 - 2 = 103
    mean(out[2,] < .05)
    [1] 0.0493         # rejection rate near 5%, as theory predicts

(3) However, if the smaller sample has variance 4 and the larger has variance 1, then the pooled test with a nominal 5% significance level actually has a significance level of almost 30% (rich in false discoveries).

    set.seed(1234)
    out = replicate(10^5, as.numeric(t.test(rnorm(5,0,2), rnorm(100), var.eq=T)[2:3]))
    mean(out[1,])
    [1] 103
    mean(out[2,] < .05)
    [1] 0.2853

(4) The Welch test may not have exactly 5% Type I error. Even so, with a rejection rate near 5%, it is clearly preferable to the pooled test when variances are unequal, as in (3).
    set.seed(1234)
    out = replicate(10^5, as.numeric(t.test(rnorm(5,0,2), rnorm(100))[2:3]))
    mean(out[1,])
    [1] 4.209818
    mean(out[2,] < .05)
    [1] 0.05158

(5) First of two simulations focused on power. If variances are unequal and sizes are grossly unbalanced $(n_1 = 5, n_2 = 100)$, the power to detect that $\mu_1 = 4$ differs from $\mu_2 = 0$ is about 90%.

    set.seed(1234)
    out = replicate(10^5, as.numeric(t.test(rnorm(5,4,2), rnorm(100))[2:3]))
    mean(out[1,])
    [1] 4.209818
    mean(out[2,] < .05)
    [1] 0.90977

In the figure below, the green bar at left represents the power of the test. So now, a tall first bar is a good thing.

(6) If we balance the data (both samples of size 5), the power of the Welch test is reduced very little, if at all, compared to the previous simulation. Changing the means to be equal, leaving sample sizes at 5, and leaving variances unequal, a similar simulation (not shown) gave significance level 4.9%.

    set.seed(1234)
    out = replicate(10^5, as.numeric(t.test(rnorm(5,4,2), rnorm(5))[2:3]))
    mean(out[1,])
    [1] 6.000759
    mean(out[2,] < .05)
    [1] 0.89743

Summary comments. None of these simulations changes the advice to prefer the Welch test to the pooled test, except perhaps for tiny sample sizes. In that case, prior information about whether the variances are equal might help to decide between a Welch test and a pooled test. I have tried to probe in relevant directions with the few simulations above, but I certainly don't claim that they settle anything. If anyone has information from related simulations that would refine or extend my 'no new news' conclusions here, I'd be happy to see them.
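For readers without R, here is a hedged Python translation of the key comparison in (3): the nominal-5% pooled test versus the Welch test when the smaller sample (n = 5, sd 2) has four times the variance of the larger (n = 100, sd 1). The seed and simulation count are choices of this sketch, not the original.

```python
# Rejection rates of pooled vs Welch t tests under unequal variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1234)
sims = 5_000  # fewer than the 10^5 used in the R code, for speed

reject_pooled = 0
reject_welch = 0
for _ in range(sims):
    x = rng.normal(0, 2, 5)    # smaller sample, variance 4
    y = rng.normal(0, 1, 100)  # larger sample, variance 1
    if stats.ttest_ind(x, y, equal_var=True).pvalue < 0.05:
        reject_pooled += 1
    if stats.ttest_ind(x, y, equal_var=False).pvalue < 0.05:
        reject_welch += 1

# Pooled test rejects far too often (near 30%); Welch stays near 5%.
print(reject_pooled / sims, reject_welch / sims)
```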
46,005
Behaviour of Welch's t-test with unequal group sizes
The issue here does not appear to be due to imbalanced sample sizes per se; rather, the non-uniformity of the p-value seems to occur because one of the samples has a very small size. Below I show high-resolution histograms of simulated p-values for a comparison of one sample of $n_X = 100$ standard normal values against another sample of $n_Y = 2,...,10$ standard normal values. (Each comparison uses $K = 10^6$ simulations, and the histograms use bin width $w = 0.01$, so you get high resolution for the true distributions.) As you can see, for very small samples the distribution is non-uniform, with a higher-than-expected probability of a small p-value.

I am not entirely certain where this phenomenon originates, but I have some strong suspicions. It is well-known that the standard sample variance estimator yields an estimated standard deviation that is biased for small samples (see e.g., here). For normal data the sample standard deviation is biased downward by a known "correction factor" that has quite a heavy effect for very small samples. Underestimating the true standard deviation of the smaller sample in Welch's t-test would naturally lead to underestimating the likely difference between sample means under the null hypothesis of no difference, which would bias the p-value downward. Since this is exactly what we see in the histograms, my suspicion is that this phenomenon is due to the downward bias of the sample standard deviation as an estimate of the true standard deviation. I suspect that applying the standard small-sample "correction" to the sample standard deviations in the test statistic of Welch's test would substantially ameliorate this phenomenon.
    # Generate p-value simulations from Welch's test
    # First sample has size 100; second sample has sizes 2, ..., 10
    set.seed(97142903)
    SIMS  <- 10^6
    PVALS <- matrix(NA, nrow = SIMS, ncol = 9)
    for (n in 2:10) {
      for (i in 1:SIMS) {
        PVALS[i, n-1] <- t.test(rnorm(100), rnorm(n))$p.value
      }
    }

    # Generate data frame of p-value simulations
    DATA <- data.frame(n = rep(2:10, each = SIMS), p = as.vector(PVALS))

    # Plot histograms of the p-value simulations
    library(ggplot2)
    THEME <- theme(plot.title    = element_text(hjust = 0.5, size = 14, face = 'bold'),
                   plot.subtitle = element_text(hjust = 0.5, face = 'bold'))
    FIGURE <- ggplot(aes(x = p), data = DATA) +
      geom_histogram(binwidth = 0.01, boundary = 0, fill = 'blue') +
      facet_wrap(~ n, ncol = 1) + THEME +
      ggtitle('Histograms of p-value simulations') +
      labs(subtitle = '(Sample of 100 against sample of stated size)') +
      xlab('p value') + ylab('Count')
    FIGURE
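As a supplement (an addition, not part of the original answer), the small-sample bias mentioned above can be quantified. For normal data, $E[s] = c_4(n)\,\sigma$ with $c_4(n) = \sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$, so the sample standard deviation underestimates $\sigma$, badly so for tiny $n$:

```python
# Unbiasing constant c4(n) for the sample sd of n normal observations.
import math

def c4(n):
    """E[s] / sigma for a sample of n i.i.d. normal observations."""
    return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

for n in [2, 3, 5, 10, 100]:
    print(n, round(c4(n), 4))
# c4(2) = sqrt(2/pi) ~ 0.7979: with n = 2 the sd is underestimated by about
# 20% on average, consistent with the downward-biased p-values for tiny samples.
```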
46,006
Regression model with aggregated targets
Here's an approach for solving this type of problem using latent variable models. It's not a specific model, but a general way to formulate a model by breaking the description of the system into two parts: the relationship between individual inputs and (unobserved) individual outputs, and the relationship between individual outputs and (observed) aggregate group outputs. This gives a natural way to think about the problem that (hopefully somewhat) mirrors the data generating process, and makes assumptions explicit. Linear or nonlinear relationships can be accommodated, as well as various types of noise model. There's well-developed, general-purpose machinery for performing inference in latent variable models (mentioned below). Finally, explicitly including individual outputs in the model gives a principled way to make predictions about them. But, of course there's no free lunch--aggregating data destroys information.

General approach

The central idea is to treat the individual outputs as latent variables, since they're not directly observed. Suppose the individual inputs are $\{x_1, \dots, x_n\}$, where each $x_i \in \mathbb{R}^d$ contains both individual and group-level features for the $i$th individual (group-level features would be duplicated across individuals). Inputs are stored on the rows of matrix $X \in \mathbb{R}^{n \times d}$. The corresponding individual outputs are represented by $y = [y_1, \dots, y_n]^T$ where $y_i \in \mathbb{R}$.

The first step is to postulate a relationship between the individual inputs and outputs, even though the individual outputs are not directly observed in the training data. This takes the form of a joint conditional distribution $p(y \mid X, \theta)$ where $\theta$ is a parameter vector. Of course, it factorizes as $\prod_{i=1}^n p(y_i \mid x_i, \theta)$ if the outputs are conditionally independent, given the inputs (e.g. if error terms are independent).
Next, we relate the unobserved individual outputs to the observed aggregate group outputs $\bar{y} = [\bar{y}_1, \dots, \bar{y}_k]^T$ (for $k$ groups). In general, this takes the form of another conditional distribution $p(\bar{y} \mid y, \phi)$, since the observed group outputs may be a noisy function of the individual outputs (with parameters $\phi$). Note that $\bar{y}$ is conditionally independent of $X$, given $y$. If group outputs are a deterministic function of the individual outputs, then $p(\bar{y} \mid y)$ takes the form of a delta function.

The joint likelihood of the individual and group outputs can then be written as:

$$p(y, \bar{y} \mid X, \theta, \phi) = p(\bar{y} \mid y, \phi) p(y \mid X, \theta)$$

Since the individual outputs are latent variables, they must be integrated out of the joint likelihood to obtain the marginal likelihood for the observed group outputs:

$$p(\bar{y} \mid X, \theta, \phi) = \int p(\bar{y} \mid y, \phi) p(y \mid X, \theta) dy$$

If group outputs are a known, deterministic function of the individual outputs, the marginal likelihood can be written directly without having to think about this integral (and $\phi$ can be ignored).

Maximum likelihood estimation

Maximum likelihood estimation of the parameters proceeds by maximizing the marginal likelihood:

$$\theta_{ML}, \phi_{ML} \ = \ \arg \max_{\theta,\phi} \ p(\bar{y} \mid X, \theta, \phi)$$

If the above integral can be solved analytically, it's possible to directly optimize the resulting marginal likelihood (either analytically or numerically). However, the integral may be intractable, in which case the expectation maximization algorithm can be used. The maximum likelihood parameters $\theta_{ML}$ could be studied to learn about the data generating process, or used to predict individual outputs for out-of-sample data.
For example, given a new individual input $x_*$, we have the predictive distribution $p(y_* \mid x_*, \theta_{ML})$ (whose form we already chose in the first step above). Note that this distribution doesn't account for uncertainty in estimating the parameters, unlike the Bayesian version below. But, one could construct frequentist prediction intervals (e.g. by bootstrapping).

Care may be needed when making inferences about individuals based on aggregated data (e.g. see various forms of ecological fallacy). It's possible that these issues may be mitigated to some extent here, since individual inputs are known, and only the outputs are aggregated (and parameters are assumed to be common to all individuals). But, I don't want to make any strong statements about this without thinking about it more carefully.

Bayesian inference

Alternatively, we may be interested in the posterior distribution over parameters:

$$p(\theta, \phi \mid \bar{y}, X) = \frac{1}{Z} p(\bar{y} \mid X, \theta, \phi) p(\theta, \phi)$$

where $Z$ is a normalizing constant. Note that this is based on the marginal likelihood, as above. It also requires that we specify a prior distribution over parameters $p(\theta, \phi)$. In some cases, it may be possible to find a closed form expression for the posterior. This requires an analytical solution to the integral in the marginal likelihood, as well as the integral in the normalizing constant. Otherwise, the posterior can be approximated, e.g. by sampling (as in MCMC) or variational methods.

Given a new individual input $x_*$, we can make predictions about the output $y_*$ using the posterior predictive distribution. This is obtained by averaging the predictive distributions for each possible choice of parameters, weighted by the posterior probability of these parameters given the training data:

$$p(y_* \mid x_*, X, \bar{y}) = \iint p(y_* \mid x_*, \theta) p(\theta, \phi \mid \bar{y}, X) d\theta d\phi$$

As above, approximations may be necessary.
Example

Here's an example showing how to apply the above approach with a simple, linear model, similar to that described in the question. One could naturally apply the same techniques using nonlinear functions, more complicated noise models, etc.

Generating individual outputs

Let's suppose the unobserved individual outputs are generated as a linear function of the inputs, plus i.i.d. Gaussian noise. Assume the inputs include a constant feature (i.e. $X$ contains a column of ones), so we don't need to worry about an extra intercept term.

$$y_i = \beta \cdot x_i + \epsilon_i \quad \quad \epsilon_i \sim \mathcal{N}(0, \sigma^2)$$

Therefore, $y = [y_1, \dots, y_n]^T$ has a Gaussian conditional distribution:

$$p(y \mid X, \beta, \sigma^2) = \mathcal{N}(y \mid X \beta, \sigma^2 I)$$

Generating aggregate group outputs

Suppose there are $k$ non-overlapping groups, and the $i$th group contains $n_i$ known points. For simplicity, assume we observe the mean output for each group:

$$\bar{y} = W y$$

where $W$ is a $k \times n$ weight matrix that performs averaging over individuals in each group: $W_{ij} = \frac{1}{n_i}$ if group $i$ contains point $j$, otherwise $0$. Alternatively, we might have assumed that observed group outputs are contaminated with additional noise (which would lead to a different expression for the marginal likelihood below).

Marginal likelihood

Note that $\bar{y}$ is a deterministic, linear transformation of $y$, and $y$ has a Gaussian conditional distribution. Therefore, the conditional distribution of $\bar{y}$ (i.e. the marginal likelihood) is also Gaussian, with mean $W X \beta$ and covariance matrix $\sigma^2 W W^T$. Note that $W W^T = \text{diag}(\frac{1}{n_1}, \dots, \frac{1}{n_k})$, which follows from the structure of $W$ above. Let $\bar{X} = W X$ be a matrix whose $i$th row contains the mean of the inputs in the $i$th group.
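A small numeric check (illustrative, not from the answer; the group assignment is a made-up example) of the averaging matrix $W$ and the claim that $W W^T = \text{diag}(1/n_1, \dots, 1/n_k)$:

```python
# Build the averaging matrix W for two hypothetical groups and verify its
# properties: W @ y gives group means, and W @ W.T is diag(1/n_1, 1/n_2).
import numpy as np

groups = [[0, 1, 2], [3, 4]]   # n_1 = 3, n_2 = 2, n = 5 points total
n = 5
W = np.zeros((len(groups), n))
for i, members in enumerate(groups):
    W[i, members] = 1.0 / len(members)

y = np.array([1.0, 2.0, 3.0, 10.0, 20.0])
print(W @ y)    # group means: 2 and 15
print(W @ W.T)  # diag(1/3, 1/2), as stated in the text
```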
Then, the marginal likelihood can be written as:

$$p(\bar{y} \mid X, \beta, \sigma^2) = \mathcal{N} \left( \bar{y} \ \Big| \ \bar{X} \beta, \ \sigma^2 \text{diag} \big( \frac{1}{n_1}, \dots, \frac{1}{n_k} \big) \right)$$

The covariance matrix is diagonal, so the observed outputs are conditionally independent. But they're not identically distributed; the variances are scaled by the reciprocal of the number of points in each group. This reflects the fact that larger groups average out the noise to a greater extent.

Maximum likelihood estimation

Maximizing the likelihood is equivalent to minimizing the following loss function, which was obtained by writing out the negative log marginal likelihood and then discarding constant terms:

$$\mathcal{L}(\beta, \sigma^2) = k \log(\sigma^2) + \frac{1}{\sigma^2} (\bar{y} - \bar{X} \beta)^T N (\bar{y} - \bar{X} \beta)$$

where $N = \text{diag}(n_1, \dots, n_k)$. From the loss function, it can be seen that the maximum likelihood weights $\beta_{ML}$ are equivalent to those obtained by a form of weighted least squares: specifically, by regressing the group-average outputs $\bar{y}$ against the group-average inputs $\bar{X}$, with each group weighted by the number of points it contains.

$$\beta_{ML} = (\bar{X}^T N \bar{X})^{-1} \bar{X}^T N \bar{y}$$

The estimated variance is given by a weighted sum of the squared residuals:

$$\sigma^2_{ML} = \frac{1}{k} (\bar{y} - \bar{X} \beta_{ML})^T N (\bar{y} - \bar{X} \beta_{ML})$$

Prediction

Given a new input $x_*$, the conditional distribution for the corresponding individual output $y_*$ is:

$$p(y_* \mid x_*, \beta_{ML}, \sigma^2_{ML}) = \mathcal{N}(y_* \mid \beta_{ML} \cdot x_*, \sigma^2_{ML})$$

The conditional mean $\beta_{ML} \cdot x_*$ could be used as a point prediction.

References

Machine learning: A probabilistic perspective (Murphy, 2012). I don't recall that it speaks specifically about aggregated data, but it covers concepts related to latent variable models quite well.
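A hedged end-to-end sketch of the maximum likelihood estimator above (the data-generating setup, group sizes, and noise level are all made up for illustration): simulate individual outputs, aggregate to group means, then recover $\beta_{ML} = (\bar{X}^T N \bar{X})^{-1} \bar{X}^T N \bar{y}$ by size-weighted least squares on the group averages.

```python
# Weighted least squares on group-aggregated targets.
import numpy as np

rng = np.random.default_rng(0)
beta_true = np.array([2.0, -1.0])     # intercept (constant feature) and slope
sizes = rng.integers(2, 30, size=50)  # 50 groups of varying size
k = len(sizes)

Xbar = np.empty((k, 2))
ybar = np.empty(k)
for g, ng in enumerate(sizes):
    X = np.column_stack([np.ones(ng), rng.normal(size=ng)])  # constant feature
    y = X @ beta_true + rng.normal(0, 0.5, size=ng)          # latent outputs
    Xbar[g] = X.mean(axis=0)  # observed group-mean inputs (rows of W X)
    ybar[g] = y.mean()        # observed group-mean outputs (W y)

# beta_ML = (Xbar' N Xbar)^{-1} Xbar' N ybar, with N = diag(n_1, ..., n_k)
N = np.diag(sizes.astype(float))
beta_ml = np.linalg.solve(Xbar.T @ N @ Xbar, Xbar.T @ N @ ybar)
resid = ybar - Xbar @ beta_ml
sigma2_ml = (resid @ N @ resid) / k

print(beta_ml)  # should be close to beta_true, up to simulation noise
```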
Regression model with aggregated targets
Here's an approach for solving this type of problem using latent variable models. It's not a specific model, but a general way to formulate a model by breaking the description of the system into two p
Regression model with aggregated targets Here's an approach for solving this type of problem using latent variable models. It's not a specific model, but a general way to formulate a model by breaking the description of the system into two parts: the relationship between individual inputs and (unobserved) individual outputs, and the relationship between individual outputs and (observed) aggregate group outputs. This gives a natural way to think about the problem that (hopefully somewhat) mirrors the data generating process, and makes assumptions explicit. Linear or nonlinear relationships can be accommodated, as well as various types of noise model. There's well-developed, general-purpose machinery for performing inference in latent variable models (mentioned below). Finally, explicitly including individual outputs in the model gives a principled way to make predictions about them. But, of course there's no free lunch--aggregating data destroys information. General approach The central idea is to treat the individual outputs as latent variables, since they're not directly observed. Suppose the individual inputs are $\{x_1, \dots, x_n\}$, where each $x_i \in \mathbb{R}^d$ contains both individual and group-level features for the $i$th individual (group-level features would be duplicated across individuals). Inputs are stored on the rows of matrix $X \in \mathbb{R}^{n \times d}$. The corresponding individual outputs are represented by $y = [y_1, \dots, y_n]^T$ where $y_i \in \mathbb{R}$. The first step is to postulate a relationship between the individual inputs and outputs, even though the individual outputs are not directly observed in the training data. This takes the form of a joint conditional distribution $p(y \mid X, \theta)$ where $\theta$ is a parameter vector. Of course, it factorizes as $\prod_{i=1}^n p(y_i \mid x_i, \theta)$ if the outputs are conditionally independent, given the inputs (e.g. if error terms are independent). 
Next, we relate the unobserved individual outputs to the observed aggregate group outputs $\bar{y} = [\bar{y}_1, \dots, \bar{y}_k]^T$ (for $k$ groups). In general, this takes the form of another conditional distribution $p(\bar{y} \mid y, \phi)$, since the observed group outputs may be a noisy function of the individual outputs (with parameters $\phi$). Note that $\bar{y}$ is conditionally independent of $X$, given $y$. If group outputs are a deterministic function of the individual outputs, then $p(\bar{y} \mid y)$ takes the form of a delta function. The joint likelihood of the individual and group outputs can then be written as: $$p(y, \bar{y} \mid X, \theta, \phi) = p(\bar{y} \mid y, \phi) p(y \mid X, \theta)$$ Since the individual outputs are latent variables, they must be integrated out of the joint likelihood to obtain the marginal likelihood for the observed group outputs: $$p(\bar{y} \mid X, \theta, \phi) = \int p(\bar{y} \mid y, \phi) p(y \mid X, \theta) dy$$ If group outputs are a known, deterministic function of the individual outputs, the marginal likelihood can be written directly without having to think about this integral (and $\phi$ can be ignored). Maximum likelihood estimation Maximum likelihood estimation of the parameters proceeds by maximizing the marginal likelihood: $$\theta_{ML}, \phi_{ML} \ = \ \arg \max_{\theta,\phi} \ p(\bar{y} \mid X, \theta, \phi)$$ If the above integral can be solved analytically, it's possible to directly optimize the resulting marginal likelihood (either analytically or numerically). However, the integral may be intractable, in which case the expectation maximization algorithm can be used. The maximum likelihood parameters $\theta_{ML}$ could be studied to learn about the data generating process, or used to predict individual outputs for out-of-sample data. 
For example, given a new individual input $x_*$, we have the predictive distribution $p(y_* \mid x_*, \theta_{ML})$ (whose form we already chose in the first step above). Note that this distribution doesn't account for uncertainty in estimating the parameters, unlike the Bayesian version below. But, one could construct frequentist prediction intervals (e.g. by bootstrapping). Care may be needed when making inferences about individuals based on aggregated data (e.g. see various forms of ecological fallacy). It's possible that these issues may be mitigated to some extent here, since individual inputs are known, and only the outputs are aggregated (and parameters are assumed to be common to all individuals). But, I don't want to make any strong statements about this without thinking about it more carefully.

Bayesian inference

Alternatively, we may be interested in the posterior distribution over parameters: $$p(\theta, \phi \mid \bar{y}, X) = \frac{1}{Z} p(\bar{y} \mid X, \theta, \phi) p(\theta, \phi)$$ where $Z$ is a normalizing constant. Note that this is based on the marginal likelihood, as above. It also requires that we specify a prior distribution over parameters $p(\theta, \phi)$. In some cases, it may be possible to find a closed form expression for the posterior. This requires an analytical solution to the integral in the marginal likelihood, as well as the integral in the normalizing constant. Otherwise, the posterior can be approximated, e.g. by sampling (as in MCMC) or variational methods. Given a new individual input $x_*$, we can make predictions about the output $y_*$ using the posterior predictive distribution. This is obtained by averaging the predictive distributions for each possible choice of parameters, weighted by the posterior probability of these parameters given the training data: $$p(y_* \mid x_*, X, \bar{y}) = \iint p(y_* \mid x_*, \theta) p(\theta, \phi \mid \bar{y}, X) d\theta d\phi$$ As above, approximations may be necessary.
Example

Here's an example showing how to apply the above approach with a simple, linear model, similar to that described in the question. One could naturally apply the same techniques using nonlinear functions, more complicated noise models, etc.

Generating individual outputs

Let's suppose the unobserved individual outputs are generated as a linear function of the inputs, plus i.i.d. Gaussian noise. Assume the inputs include a constant feature (i.e. $X$ contains a column of ones), so we don't need to worry about an extra intercept term. $$y_i = \beta \cdot x_i + \epsilon_i \quad \quad \epsilon_i \sim \mathcal{N}(0, \sigma^2)$$ Therefore, $y = [y_1, \dots, y_n]^T$ has a Gaussian conditional distribution: $$p(y \mid X, \beta, \sigma^2) = \mathcal{N}(y \mid X \beta, \sigma^2 I)$$

Generating aggregate group outputs

Suppose there are $k$ non-overlapping groups, and the $i$th group contains $n_i$ known points. For simplicity, assume we observe the mean output for each group: $$\bar{y} = W y$$ where $W$ is a $k \times n$ weight matrix that performs averaging over individuals in each group. $W_{ij} = \frac{1}{n_i}$ if group $i$ contains point $j$, otherwise $0$. Alternatively, we might have assumed that observed group outputs are contaminated with additional noise (which would lead to a different expression for the marginal likelihood below).

Marginal likelihood

Note that $\bar{y}$ is a deterministic, linear transformation of $y$, and $y$ has a Gaussian conditional distribution. Therefore, the conditional distribution of $\bar{y}$ (i.e. the marginal likelihood) is also Gaussian, with mean $W X \beta$ and covariance matrix $\sigma^2 W W^T$. Note that $W W^T = \text{diag}(\frac{1}{n_1}, \dots, \frac{1}{n_k})$, which follows from the structure of $W$ above. Let $\bar{X} = W X$ be a matrix whose $i$th row contains the mean of the inputs in the $i$th group.
Then, the marginal likelihood can be written as: $$p(\bar{y} \mid X, \beta, \sigma^2) = \mathcal{N} \left( \bar{y} \ \Big| \ \bar{X} \beta, \ \sigma^2 \text{diag} \big( \frac{1}{n_1}, \dots, \frac{1}{n_k} \big) \right)$$ The covariance matrix is diagonal, so the observed outputs are conditionally independent. But, they're not identically distributed; the variances are scaled by the reciprocal of the number of points in each group. This reflects the fact that larger groups average out the noise to a greater extent.

Maximum likelihood estimation

Maximizing the likelihood is equivalent to minimizing the following loss function, which was obtained by writing out the negative log marginal likelihood and then discarding constant terms: $$\mathcal{L}(\beta, \sigma^2) = k \log(\sigma^2) + \frac{1}{\sigma^2} (\bar{y} - \bar{X} \beta)^T N (\bar{y} - \bar{X} \beta)$$ where $N = \text{diag}(n_1, \dots, n_k)$. From the loss function, it can be seen that the maximum likelihood weights $\beta_{ML}$ are equivalent to those obtained by a form of weighted least squares. Specifically, by regressing the group-average outputs $\bar{y}$ against the group-average inputs $\bar{X}$, with each group weighted by the number of points it contains. $$\beta_{ML} = (\bar{X}^T N \bar{X})^{-1} \bar{X}^T N \bar{y}$$ The estimated variance is given by a weighted sum of the squared residuals: $$\sigma^2_{ML} = \frac{1}{k} (\bar{y} - \bar{X} \beta_{ML})^T N (\bar{y} - \bar{X} \beta_{ML})$$

Prediction

Given a new input $x_*$, the conditional distribution for the corresponding individual output $y_*$ is: $$p(y_* \mid x_*, \beta_{ML}, \sigma^2_{ML}) = \mathcal{N}(y_* \mid \beta_{ML} \cdot x_*, \sigma^2_{ML})$$ The conditional mean $\beta_{ML} \cdot x_*$ could be used as a point prediction.

References

Machine learning: A probabilistic perspective (Murphy 2012). I don't recall that it speaks specifically about aggregated data, but, it covers concepts related to latent variable models quite well.
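To make the weighted least squares expressions concrete, here is a small self-contained simulation (a Python sketch; the group sizes, true parameters and seed are invented for illustration) that recovers $\beta_{ML}$ and $\sigma^2_{ML}$ from group means alone, solving the $2 \times 2$ weighted normal equations by hand:

```python
import math
import random

random.seed(0)
beta0, beta1 = 3.0, 0.75      # true parameters (made up for this sketch)
sigma = 10.0
k = 40                        # number of groups

# simulate groups of different sizes, then keep only (n_i, xbar_i, ybar_i)
sizes, xbar, ybar = [], [], []
for g in range(k):
    n_i = 10 * (1 + g % 4)
    xs = [random.gauss(g, 5.0) for _ in range(n_i)]
    ys = [beta0 + beta1 * x + random.gauss(0.0, sigma) for x in xs]
    sizes.append(n_i)
    xbar.append(sum(xs) / n_i)
    ybar.append(sum(ys) / n_i)

# beta_ML = (Xbar^T N Xbar)^{-1} Xbar^T N ybar with Xbar = [1 | xbar], N = diag(n_i);
# with two parameters this is a 2x2 weighted normal-equation system
S00 = sum(sizes)
S01 = sum(n * x for n, x in zip(sizes, xbar))
S11 = sum(n * x * x for n, x in zip(sizes, xbar))
t0 = sum(n * y for n, y in zip(sizes, ybar))
t1 = sum(n * x * y for n, x, y in zip(sizes, xbar, ybar))
det = S00 * S11 - S01 * S01
b0 = (S11 * t0 - S01 * t1) / det
b1 = (S00 * t1 - S01 * t0) / det

# sigma^2_ML = (1/k) * sum_i n_i * (ybar_i - fitted_i)^2
sigma2 = sum(n * (y - (b0 + b1 * x)) ** 2
             for n, x, y in zip(sizes, xbar, ybar)) / k
print(b0, b1, math.sqrt(sigma2))   # estimates near 3, 0.75 and 10
```

Weighting each group by $n_i$ is what keeps the estimator efficient here: larger groups have less noisy means, so they carry proportionally more information.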
Regression model with aggregated targets
To verify the solution suggested in the great answer by @user20160 I prepared a toy example that demonstrates it. As suggested by @user20160, I am posting the code as a supplement to the answer. For explanations of this approach, check the other answer.

First, let's generate the independent variable and append a column of ones to it, to use the matrix formulation of the model.

set.seed(42)
n <- 5000; k <- 50; m <- n/k
x <- rnorm(n, mean = (1:n)*0.01, sd = 10)
X <- cbind(Intercept=1, x)

Next, let's generate the individual outcomes $y = X\beta + \varepsilon$.

beta <- rbind(3, 0.75)
sigma <- 10
y <- rnorm(n, X %*% beta, sigma)

To aggregate the results, we use the $k \times n$ matrix $W$ of zeros and ones indicating group membership. To estimate group means, we take $\bar y = \tfrac{1}{m}W y$ (same results as tapply(y, grp, mean)).

grp <- factor(rep(1:k, each=m))
W <- t(model.matrix(~grp-1))
ybar <- as.vector((W/m) %*% y)
mu <- ybar                  # the group means
mu_rep <- rep(mu, each=m)   # group means repeated for each individual

As expected, the conditional variability of $\bar y$ is much smaller than that of $y$. Next, we define the loss function of the regular regression model (which naively uses each individual's group mean as its target) and of the "aggregated" model:

lm_loss <- function(pars) mean((mu_rep - as.vector(X %*% pars))^2)
aggr_loss <- function(pars) mean((mu - as.vector((W/m) %*% (X %*% pars)))^2)

Results from the regular regression model are pretty poor.

init <- rbind(0, 0)
(est1 <- optim(init, lm_loss))$par
##          [,1]
## [1,] 9.058655
## [2,] 0.502987

The "aggregated" model gives results that are really close to the true values of $\beta$.

(est2 <- optim(init, aggr_loss))$par
##           [,1]
## [1,] 3.1029468
## [2,] 0.7424815

You can also see on the plot below that, even though the input data was aggregated, using the "aggregated" model we are able to recover the true regression line almost perfectly. Also, if we compare the mean squared errors of predictions for the individual values given the estimated parameters, the "aggregated" model has a smaller squared error.
mean((y - as.vector(X %*% est1$par))^2)
## [1] 119.4491
mean((y - as.vector(X %*% est2$par))^2)
## [1] 101.4573

The same thing happens if we minimize the negative log-likelihood. Additionally, this lets us estimate $\sigma$, and it also gives a much better result (43.95 for linear regression vs 8.02 for the "aggregated" model).

lm_llik <- function(pars) -1 * sum(dnorm(mu_rep, as.vector(X %*% pars[1:2]), pars[3]/sqrt(k), log=TRUE))
aggr_llik <- function(pars) -1 * sum(dnorm(mu, as.vector((W/m) %*% (X %*% pars[1:2])), pars[3]/sqrt(k), log=TRUE))
Regression model with aggregated targets
Different approaches could be appropriate depending on your goal. I'll describe one approach in case your goal is group-level prediction. You could use the individual-level features to build a bunch of aggregated features for each group (mean, std, median, max, min, ...). You now have richer features for each group which are likely to perform well on the group level. I've seen this work thousands of times in Kaggle competitions. Also, don't stick to linear regression, gradient boosting works in many cases with tabular data, and can even help you weed out some features (make lots of them, you never know what will work). As a bonus, this also gives you a way of predicting individual scores by feeding the model a group of one (this feels a little shady though).
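A minimal sketch of this kind of group-level feature construction (plain Python with toy made-up numbers; in practice a dataframe library's group-by/aggregate would do this in one line):

```python
import statistics
from collections import defaultdict

# toy individual-level rows: (group_id, feature_value) -- numbers are made up
rows = [(1, 2.0), (1, 4.0), (1, 6.0),
        (2, 10.0), (2, 10.0),
        (3, 1.0), (3, 3.0), (3, 5.0), (3, 7.0)]

by_group = defaultdict(list)
for g, v in rows:
    by_group[g].append(v)

# one enriched feature dict per group: mean, std, min, max, count
group_features = {
    g: {"mean": statistics.mean(vs),
        "std": statistics.pstdev(vs),
        "min": min(vs),
        "max": max(vs),
        "n": len(vs)}
    for g, vs in by_group.items()
}
print(group_features[1])
```

Each group's feature dict then becomes one training row, with the observed group score as the target.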
Why does a positively correlated variable have a negative coefficient in a multiple regression?
You are comparing two very different things. In the first case, you are making pairwise comparisons when calculating the correlation coefficient between BodyFat and Weight. In the second, you are doing a multiple regression that also accounts for the variation in BodyFat that is explained by all your other variables. To oversimplify a bit: after accounting for the variation explained by the other variables, the relationship between Weight and BodyFat is negative. Since if you ignore the other variables, the relationship is positive, this implies that Weight covaries with one or more of the other variables (which you can also see in the correlation matrix). You can see that Abdomen is strongly positively correlated with both Weight ($r$ = 0.87) and BodyFat ($r$ = 0.83), so it is plausible that accounting for Abdomen undid the positive relationship between Weight and BodyFat. If you want to understand this better, calculate the residuals of the simple linear regression BodyFat ~ Abdomen. Then make 3 plots and examine them: BodyFat ~ Weight, BodyFat ~ Abdomen, and residuals(BodyFat ~ Abdomen) ~ Weight. I'll also note that doing a multiple regression with predictors that are this highly correlated is likely to lead to flawed inferences.
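The sign flip described here (positive pairwise correlation but a negative partial coefficient) is easy to reproduce in a simulation. The following Python sketch uses invented variables loosely analogous to Abdomen (a), Weight (w) and BodyFat (y) -- not the actual data:

```python
import math
import random

random.seed(1)
n = 20000
a = [random.gauss(0.0, 1.0) for _ in range(n)]                # plays the role of Abdomen
w = [ai + random.gauss(0.0, 0.5) for ai in a]                 # "Weight": strongly correlated with a
y = [2.0 * ai - wi + random.gauss(0.0, 0.3)                   # "BodyFat": true effect of w is negative
     for ai, wi in zip(a, w)]

def center(v):
    mu = sum(v) / len(v)
    return [x - mu for x in v]

def dot(u, v):
    return sum(x * z for x, z in zip(u, v))

a, w, y = center(a), center(w), center(y)

# pairwise correlation of y with w: clearly positive
r_wy = dot(w, y) / math.sqrt(dot(w, w) * dot(y, y))

# multiple regression y ~ a + w (variables centred, so no intercept is needed):
# solve the 2x2 normal equations by hand
Saa, Saw, Sww = dot(a, a), dot(a, w), dot(w, w)
Say, Swy = dot(a, y), dot(w, y)
det = Saa * Sww - Saw * Saw
b_a = (Sww * Say - Saw * Swy) / det
b_w = (Saa * Swy - Saw * Say) / det
print(r_wy, b_a, b_w)   # correlation positive, coefficient on w negative
```

Marginally, w inherits a positive association with y through a; conditionally on a, the direct (negative) effect of w is recovered.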
Why does a positively correlated variable have a negative coefficient in a multiple regression?
To add to @mkt's answer, which does capture all the most critical mathematical aspects, a few observations:

- The intercept CI spans from -70 to -33. Assuming that body fat is a percentage, this means that the baseline amount of fat in the cohort is very variable.
- If the distribution of BodyFat is left- or right-tailed, then the mean will be skewed relative to the median. This would influence inference on the coefficients.
- Any effect sizes smaller than the mean effect size across variables will have negative coefficients because they provide less effect than the other variables. See the point about standardisation below.
- Weight is the only coefficient whose CI does not cross 0; this may reflect inadequate pre-processing rather than anything meaningful at this stage.
- There is no evidence of any use of height/length to normalise the variables, despite most having a strong relationship with height (tall people/long organisms, compared to short people/organisms of the same relative build, will have higher weight, larger chest, bigger abdomen, wider hips, thicker thighs and biceps). This could account for a high proportion of the covariance highlighted by @mkt.
- Correlation between inputs leads to unstable coefficient estimation (the model has no way of knowing the causality). Data reduction (e.g. PCA, PLS) or shrinkage methods (LASSO, Ridge, Elastic Net) could improve the orthogonality of the inputs to the model and improve interpretability.
- Gender also usually influences the covariance of the listed independent variables, and so should be included as a factor for a more complete inference.
- Do you standardise the variables? I don't see that in your code. The variables appear to be on different scales, which would also make interpretation of the coefficients more difficult.
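On the last point, standardising just means centring each predictor and scaling it to unit standard deviation, so that coefficients become comparable across variables. A minimal sketch (with made-up numbers):

```python
import statistics

# toy predictor columns on very different scales (numbers are made up)
weight = [150.0, 180.0, 210.0, 165.0, 195.0]   # pounds
height = [1.60, 1.75, 1.90, 1.68, 1.82]        # metres

def standardise(v):
    # subtract the mean, divide by the sample standard deviation
    mu, sd = statistics.mean(v), statistics.stdev(v)
    return [(x - mu) / sd for x in v]

zw, zh = standardise(weight), standardise(height)
# each standardised column now has mean 0 and standard deviation 1,
# so regression coefficients are on a comparable scale
print(statistics.mean(zw), statistics.stdev(zw))
```

In R this is what scale() does by default.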
Questions regarding proof of probability integral transform
The notation is getting in the way, so let's simplify it. Let $X:\Omega\to\mathbb{R}$ be a random variable with a distribution function $F_X$ defined by $$F_X(x) = \Pr(X \le x) = \Pr(\{\omega\in\Omega\mid X(\omega)\le x\})$$ for all real numbers $x.$ The axioms of probability imply $F_X$ is non-decreasing and at any point of discontinuity its value is the limit from the right of its values (from left to right, its graph jumps up to its value rather than up from its value). Consider any measurable function $h:\mathbb{R}\to\mathbb{R}$ with these properties (whether or not it actually is a probability function), as graphed here: Because $h$ is measurable, the composition $Y = h \circ X:\Omega\to\mathbb R$ is also a random variable. When $X$ has the value $x,$ $Y$ has the value $h(x):$ you can read it directly off the graph. We will want to go backwards from values of $Y$ to corresponding values of $X$ by inverting $h.$ Two possible behaviors make this problematic, as shown by the dotted colored lines in the figure. Where $h$ has a jump from a value $a$ to a value $b$ at an argument $x,$ define the inverse of $h$ (written $h^{-1}$) at any point in the interval $[a,b)$ to be the limiting height of all points strictly to the left of $x.$ For instance, for any $q_1$ with $a \le q_1 \lt b$ in the figure, the values of $h^{-1}(q_1)$ are all the same, equal to the height of the open circle (the "base" of the jump). Wherever $h$ is horizontal at a height of $q_2,$ there is an entire closed interval $[a,b]$ of values for which $h(x) = q_2$ whenever $a \le q_2 \le b.$ Define $h^{-1}(q_2)$ to be the largest such value (or infinity if there is no largest value). 
These definitions imply $$h(h^{-1}(y))=y\tag{*}$$ whenever $y$ is in the image of $h$ and otherwise $h(h^{-1}(y)) \ge y.$ The definitions are arranged so that--as the figure clearly shows--whenever $y$ is a possible value of $Y,$ $$\Pr(Y\le y) = \Pr(h(X)\le y) = \Pr(X \le h^{-1}(y)) = F_X(h^{-1}(y))\tag{**}$$ and otherwise (where $y$ is in the middle of a jump), $$\Pr(Y\le y) = \Pr(h(X)\le y) = \Pr(X \lt h^{-1}(y)).$$ In particular, the mere substitution of $F_X$ for $h$ (whose values lie in the interval $[0,1]$) in $(*)$ and $(**)$ shows that for any $p$ in the image of $F_X,$ $$\Pr(Y \le p) = \Pr(F_X(X)\le p) = F_X(F_X^{-1}(p)) = p.$$ (I hope this makes it clear that the subscript "$X$" on $F$ is not acting as a random variable in these expressions, which perhaps is the most confusing aspect of the notation; $F_X$ is a completely determinate, non-random function.) When $F_X$ is everywhere continuous (that is, $X$ is a continuous random variable), this is true for all $p\in [0,1]$. The equation $\Pr(Y\le p) = p$ for $0\le p \le 1$ defines the uniform distribution on $[0,1].$ We have concluded: Transforming the continuous random variable $X$ via its probability function $F_X$ creates a random variable $Y=F_X(X)$ that has the uniform distribution on the interval $[0,1].$ This is the probability integral transform, or PIT. Although no integration was needed to define it, notice that absolutely continuous random variables $X$ have densities $f_X$ with $f_X(x)\mathrm{d}x = \mathrm{d}F_X(x),$ whence substituting $y = F_X(x)$ in the integral for the expectation of any measurable function $g$ gives $$E_X[g(X)] = \int_{\mathbb R} g(x) f_X(x) \mathrm{d}x = \int_{\mathbb R} g\left(F_X^{-1}(y)\right) \mathrm{d} y = E_Y\left[g\circ F_X^{-1}(Y)\right].$$ In other words, the PIT converts integration with respect to the density $f_X(x)\mathrm{d}x$ into integration with respect to $\mathrm{d}y.$
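A quick numerical illustration of this conclusion (a sketch, using $X \sim \text{Exponential}(1)$ so that $F_X(x) = 1 - e^{-x}$ has a simple closed form): the transformed variable $Y = F_X(X)$ should behave like a Uniform$[0,1]$ variable, with mean $1/2$, variance $1/12$, and $\Pr(Y \le p) \approx p$.

```python
import math
import random

random.seed(0)
n = 100000

# X ~ Exponential(1), whose CDF is F_X(x) = 1 - exp(-x)
xs = [random.expovariate(1.0) for _ in range(n)]
us = [1.0 - math.exp(-x) for x in xs]     # Y = F_X(X)

mean_u = sum(us) / n
var_u = sum((u - mean_u) ** 2 for u in us) / n

def frac_below(p):
    # empirical Pr(Y <= p); should be close to p if Y is Uniform[0,1]
    return sum(u <= p for u in us) / n

print(mean_u, var_u, frac_below(0.3), frac_below(0.8))
```

The empirical mean, variance and CDF values all match the uniform distribution to within sampling error.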
Questions regarding proof of probability integral transform
After having done some homework on the subject, I think I've got a better handle on the proof that I find in [1]. I wanted to take an opportunity to set down my understanding for pedagogic purposes.

Scope: I'm going to limit this answer to the case of a strictly monotonic cumulative distribution function. It's my understanding that, in his answer to this post, @whuber considers a more general situation. In addition, this is not a formal proof, just the outline of my understanding, so certain details are likely omitted.

Rudimentary derivation: By $X$ I denote a real-valued random variable. By $x$ I denote a real variable. By $F_X(x)$ I denote the marginal cumulative distribution function of the random variable $X$, where $$F_X(x) = P(X\leq x).\quad \textrm{Eq. 1}$$ By $Y$ I denote a new random variable defined in terms of $X$ as $$Y = F_X(X). \quad \textrm{Eq. 2}$$ By $y$ I denote a real number in the interval $[0,1]$. By $F_Y(y)$ I denote the marginal cumulative distribution function of the random variable $Y$, where $$F_Y(y) = P(Y\leq y).$$ From Eq. 2, I can substitute $F_X(X)$ in place of $Y$. I find $$F_Y(y) = P(F_X(X) \leq y).$$ Since $F_X$ is assumed to be strictly increasing, it is invertible. When I apply the inverse $F^{-1}_X$ to both sides of the inequality in the argument, I find $F^{-1}_X(F_X(X)) \leq F^{-1}_X(y)$. Again, since $F_X$ is invertible, $F^{-1}_X(F_X(X)) = X$. I continue with my central train of thought and write $$F_Y(y) = P( X \leq F^{-1}_X(y)). \quad \textrm{Eq. 3}$$ Next, by comparing Eq. 3 with Eq. 1, I find that $$F_Y(y) = F_X(F_X^{-1}(y)). $$ Once again, since $F_X$ is invertible, $$F_Y(y) = y. \quad \textrm{Eq. 4} $$ As I write in the scope, there are some details missing. Nonetheless, if one compares the result in Eq.
4 with the cumulative distribution function (CDF) given in the first table in [5], which reads $$\text{CDF} : \begin{cases} 0,&\text{for}~y<a, \\ \frac{y-a}{b-a},& \text{for}~y\in[a,b],~\text{and} \\ 1,& \text{for}~y>b ; \end{cases} $$ then one may see that Eq. 4 describes the cumulative distribution function of a random variable with a standard-uniform distribution (i.e., $a=0$ and $b=1$). Thus, the random variable $Y$, which is given by $Y = F_X(X)$, has a standard-uniform distribution.

Bibliography

[5] https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)
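Eq. 4 can also be checked numerically. The sketch below uses the standard logistic distribution, chosen only because both its CDF and its inverse have closed forms: it draws $X$ with CDF $F_X$, computes $Y = F_X(X)$, and verifies that the empirical CDF of $Y$ satisfies $F_Y(y) \approx y$.

```python
import math
import random

random.seed(2)
n = 100000

# standard logistic CDF and its inverse
def F(x):
    return 1.0 / (1.0 + math.exp(-x))

def Finv(u):
    return math.log(u / (1.0 - u))

# draw X with CDF F (via the inverse transform), then set Y = F(X)
xs = [Finv(random.random()) for _ in range(n)]
ys = [F(x) for x in xs]

# Eq. 4 says F_Y(y) = y; check the empirical CDF of Y at a few points
for y0 in (0.1, 0.5, 0.9):
    print(y0, sum(y <= y0 for y in ys) / n)
```

The same pair of functions run the other way round ($F^{-1}$ applied to uniforms) is the usual inverse-transform recipe for sampling from $F$.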
Questions regarding proof of probability integral transform
After having done some homework on the subject, I think I've got a better handle on the proof that I find in [1]. I wanted to take an opportunity to set down my understanding for pedagogic purposes.
Questions regarding proof of probability integral transform After having done some homework on the subject, I think I've got a better handle on the proof that I find in [1]. I wanted to take an opportunity to set down my understanding for pedagogic purposes. Scope: I'm going to limit this answer to the case of a strictly monotonic cumulative distribution function. It's my understanding that, in his answer to this post, @whuber considers a more general situation. In addition, this is not a formal proof, just the outline of my understanding. So certain details are likely omitted. Rudimentary derivation: By $X$ I denote a real-valued random variable. By $x$ I denote a real variable. By $F_X(x)$ I denote the marginal cumulative distribution function of the random variable $X$, where $$F_X(x) = P(X\leq x).\quad \textrm{Eq. 1}$$ By $Y$ I denote a new random variable defined in terms of $X$ as $$Y = F_X(X). \quad \textrm{Eq. 2}$$ By $y$ I denote a real number in the interval $[0,1]$. By $F_Y(y)$ I denote the marginal cumulative distribution function of the random variable $Y$, where $$F_Y(y) = P(Y\leq y).$$ From Eq. 2, I can substitute $F_X(X)$ in place of $Y$. I find $$F_Y(y) = P(F_X(X) \leq y).$$ Since $F_X$ is assumed to be strictly increasing, it is invertible. When I apply the inverse $F^{-1}_X$ to both sides of the inequality in the argument I find $F^{-1}_X(F_X(X)) \leq F^{-1}_X(y)$. Again, since $F_X$ is invertible, $F^{-1}_X(F_X(X)) = X$. I continue with my central train of thought and write $$F_Y(y) = P( X \leq F^{-1}_X(y)). \quad \textrm{Eq. 3}$$ Next, by comparing Eq. 3 with Eq. 1, I find that $$F_Y(y) = F_X(F_X^{-1}(y)). $$ Once again, since $F_X$ is invertible, $$F_Y(y) = y. \quad \textrm{Eq. 4} $$ As I write in the scope, there are some details missing. Nonetheless, if one compares the result in Eq. 4 with the cumulative distribution function (CDF) given in the first table in [5], which reads $$\text{CDF} : \begin{cases} 0,&\text{for}~y<a, \\ \frac{y-a}{b-a},& \text{for}~y\in[a,b],~\text{and} \\ 1,& \text{for}~y>b ; \end{cases} $$ then one may see that Eq. 4 describes the cumulative distribution function of a random variable with a standard-uniform distribution (i.e., $a=0$ and $b=1$). Thus, the random variable $Y$, which is given by $Y = F_X(X)$, has a standard-uniform distribution. Bibliography [5] https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)
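To complement the derivation above, here is a quick empirical check (a sketch in Python for illustration; the exponential distribution, its rate, and the sample size are my own arbitrary choices): drawing from a continuous distribution with a strictly increasing CDF and pushing the draws through that CDF should yield an approximately standard-uniform sample, as Eq. 4 predicts.

```python
import math
import random

random.seed(1)

# Draw from an exponential distribution with rate 2: a continuous X whose
# CDF F_X(x) = 1 - exp(-2x) is strictly increasing on its support.
xs = [random.expovariate(2.0) for _ in range(100_000)]

# Apply the probability integral transform: Y = F_X(X).
ys = [1.0 - math.exp(-2.0 * x) for x in xs]

# If Y is standard-uniform, its sample mean should be near 1/2 and its
# sample variance near 1/12.
mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
print(mean, var)
```

The same check works for any continuous $F_X$ (e.g. a Normal draw pushed through the Normal CDF), since the derivation only uses strict monotonicity.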
46,013
Probability distribution of the number of infected people in a room
Let $\mathbf{K} = \{ K_h | h \in \mathbb{N}_{0+} \}$ denote the stochastic time-series showing the number of infected people after each handshake, and let $K_0 = 1$ at the start of the series. This is a Markov chain that falls within the category of discrete "pure birth" processes. A single random handshake gives the transition probabilities: $$p_{k,k+r} \equiv \mathbb{P}( K_{h+1} = k+r | K_h = k ) = \begin{cases} 1-\frac{2k(N-k)}{N(N-1)} & & \text{if } r=0, \\[6pt] \frac{2k(N-k)}{N(N-1)} & & \text{if } r=1, \\[8pt] 0 & & \text{otherwise}. \\[6pt] \end{cases}$$ Thus, the transition matrix for the chain is: $$\mathbf{P} \equiv \begin{bmatrix} 1-\frac{2}{N} & \frac{2}{N} & \cdots & 0 & 0 & 0 \\ 0 & 1-\frac{4(N-2)}{N(N-1)} & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & 1-\frac{4(N-2)}{N(N-1)} & \frac{4(N-2)}{N(N-1)} & 0 \\ 0 & 0 & \cdots & 0 & 1-\frac{2}{N} & \frac{2}{N} \\ 0 & 0 & \cdots & 0 & 0 & 1 \\ \end{bmatrix}.$$ After $h$ random handshakes, the probability that $k$ people are infected is: $$\mathbb{P}(K_{h} = k) = [\mathbf{P}^{h}]_{1,k}.$$ You can compute this probability by programming the transition probability matrix into an appropriate piece of mathematical software (e.g., R) and then obtaining the first row of the appropriate power of the matrix. If you would like to try to get a closed-form expression for the probability, I would recommend deriving the eigen-decomposition or Jordan decomposition of the matrix to see if this simplifies the problem. Computing the probability vector: It is quite simple to program this Markov chain in R. In the code below I create a function to compute the vector of probabilities (or log-probabilities) for arbitrary input values for the number of people and the number of handshakes.
#Load required library
library(expm);

#Create a function to compute the probability vector
COMPUTE_PROBS <- function(N, h, log.p = FALSE) {

  #Define the transition probability matrix
  P <- matrix(0, nrow = N, ncol = N);
  for (k in 1:N)     { P[k,k]   <- 1 - 2*k*(N-k)/(N*(N-1)); }
  for (k in 1:(N-1)) { P[k,k+1] <- 1 - P[k,k]; }

  #Compute probability vector
  PPP <- expm::'%^%'(P,h);
  if (log.p) { PPP <- log(PPP); }
  PPP[1, ]; }

We can use this function to compute the vector of probabilities (or log-probabilities) for arbitrary values of N and h. Here is an example using some chosen values for the parameters.

#Compute an example of this probability vector
N <- 40;
h <- 80;
PROBS <- COMPUTE_PROBS(N,h);

#Plot the probability mass function
library(ggplot2);
DATA  <- data.frame(Infected = 1:N, Probability = PROBS);
THEME <- theme(plot.title    = element_text(hjust = 0.5, size = 14, face = 'bold'),
               plot.subtitle = element_text(hjust = 0.5, face = 'bold'));
ggplot(aes(x = Infected, y = Probability), data = DATA) +
  geom_bar(stat = 'identity', fill = 'blue') + THEME +
  ggtitle('PMF of Number of Infected People') +
  labs(subtitle = paste0('(', N, ' people and ', h, ' handshakes)')) +
  xlab('Number of Infected People');

Monte-Carlo simulation: We can confirm that the above result is correct by comparing the theoretical probabilities to Monte-Carlo simulations of the process. To do this, we can program a simulation function in R. (Hat tip to user2974951 for suggesting this approach, and writing the initial code.) In the code below I create a function to simulate outcomes of the chain and take empirical estimates of the vector of probabilities for arbitrary input values for the number of people and the number of handshakes.
#Create a function to simulate the chain
SIMULATE_CHAIN <- function(N, h, times = 10^5) {

  #Set the simulation vector
  SIM <- rep(0, times);

  #Run simulations
  for (s in 1:times) {

    #Compute initial vector of infected people
    INFECTED <- c(1, rep(0, N-1));

    #Implement random handshakes
    for (i in 1:h) {
      H <- sample(1:N, size = 2, replace = FALSE);
      if (INFECTED[H[1]] == 0 & INFECTED[H[2]] == 1) { INFECTED[H[1]] <- 1 }
      if (INFECTED[H[1]] == 1 & INFECTED[H[2]] == 0) { INFECTED[H[2]] <- 1 } }
    SIM[s] <- sum(INFECTED); }
  SIM; }

Using this function we can simulate the Markov chain and take empirical estimates of the probabilities of each outcome. The plot shows the same shape we obtained in our theoretical analysis, confirming that the calculations are correct.

#Simulate the chain
set.seed(1)
SIMS <- SIMULATE_CHAIN(N,h);

#Estimate the probability vector
PROBS_EST <- rep(0,N);
for (i in 1:N) { PROBS_EST[i] <- sum(SIMS == i)/length(SIMS); }

#Plot the probability mass function
DATA  <- data.frame(Infected = 1:N, Probability = PROBS_EST);
THEME <- theme(plot.title    = element_text(hjust = 0.5, size = 14, face = 'bold'),
               plot.subtitle = element_text(hjust = 0.5, face = 'bold'));
ggplot(aes(x = Infected, y = Probability), data = DATA) +
  geom_bar(stat = 'identity', fill = 'red') + THEME +
  ggtitle('Monte-Carlo estimate of PMF of Number of Infected People') +
  labs(subtitle = paste0('(', N, ' people and ', h, ' handshakes)')) +
  xlab('Number of Infected People') +
  ylab('Estimated Probability');
46,014
Probability distribution of the number of infected people in a room
Here is an alternative to Ben's answer using simulations in R, using his parameters. Edit: fixed the bug.

N=40  #number of people
h=80  #handshakes
k=1   #number of infected people at the start
n=1e5 #number of simulations

result=rep(NA,n)
for (r in 1:n) {
  initial=rep(0,N) #N healthy people
  initial[1:k]=1   #k infected
  for (t in 1:h) {
    random2=sample(1:N,2) #two random people
    if (initial[random2[1]]==1 | initial[random2[2]]==1) {
      initial[random2[1]]=initial[random2[2]]=1 #now both infected
    }
  }
  result[r]=sum(initial)
}

which looks like this
46,015
use MCMC posterior as prior for future inference
Strictly speaking, you have to rerun your MCMC algorithm from scratch to approximate the new posterior. MCMC algorithms are not sequential, which means that you cannot update their output with new data to update your estimate of the posterior. You just have to redo it. However, you can use importance sampling to recursively update your posterior approximation with new data. Here are two approaches: Quick and Dirty (and not quite right) You already have the output $\{\theta^{(i)}\}_{i=1}^M$ from an MCMC algorithm that targets $p(\theta\,|\,y_{1:t-1})$. You then observe $y_t$, and you want to somehow recycle $\{\theta^{(i)}\}_{i=1}^M$ to approximate $p(\theta\,|\,y_{1:t})$ without having to re-do everything. As I said, in order to be doing things 100% correctly, you should rerun the MCMC from scratch. But if you were hellbent on not doing that, you could do the following. Pretend that $\{\theta^{(i)}\}_{i=1}^M$ are iid draws from $p(\theta\,|\,y_{1:t-1})$. Then treat them as proposal draws for an importance sampling approximation to $p(\theta\,|\,y_{1:t})$. The importance weights will be $$w_i\propto\frac{p(\theta^{(i)}\,|\,y_{1:t})}{p(\theta^{(i)}\,|\,y_{1:t-1})}\propto p(y_t\,|y_{1:t-1},\,\theta^{(i)}).$$ The leap of faith here is treating the MCMC draws like they were evenly-weighted, iid draws from the source density $p(\theta\,|\,y_{1:t-1})$. But for private, exploratory purposes, it's not an insane thing to do when you already have the MCMC draws lying around and you want to update the approximation based on one or two new observations. Best Practices If you know in advance that you'll want to be recursively updating your posterior approximation when you observe new data, the best thing to do from the outset is to use sequential Monte Carlo (SMC) to approximate the posterior. Here are some papers: Chopin (2002 Biometrika); Chopin (2004 Annals of Statistics). 
Like the other approach, SMC is an importance sampling based method that allows you to iteratively update your posterior approximation as new data arrive. You start with a sample of iid draws from the prior, and then you recursively re-weight the sample to reflect the new information. Along the way, you also use MCMC to move each draw in your sample to a location in the parameter space that better reflects the influence of new data.
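A minimal numerical sketch of the "quick and dirty" reweighting above (Python for illustration; the Normal-mean model with known unit variance and a flat prior is my own toy example, not from the cited papers). Exact posterior draws stand in for the MCMC output, and each draw is weighted by the likelihood of the new datum:

```python
import math
import random

random.seed(1)

# Assumed toy model: y ~ N(theta, 1) with a flat prior on theta, so the
# posterior after n observations is N(mean(y), 1/n).
y_old = [random.gauss(2.0, 1.0) for _ in range(50)]
n = len(y_old)
post_mean = sum(y_old) / n
post_sd = 1.0 / math.sqrt(n)

# Stand-in for MCMC output targeting p(theta | y_{1:t-1}).
thetas = [random.gauss(post_mean, post_sd) for _ in range(50_000)]

# A new observation arrives; weight each draw by w_i proportional to
# p(y_t | theta_i), here a Normal likelihood with unit variance.
y_new = 2.5
w = [math.exp(-0.5 * (y_new - th) ** 2) for th in thetas]
total = sum(w)

# Importance-weighted estimate of the updated posterior mean, versus the
# exact updated posterior mean under the same conjugate model.
is_mean = sum(wi * th for wi, th in zip(w, thetas)) / total
exact_mean = (sum(y_old) + y_new) / (n + 1)
print(is_mean, exact_mean)
```

The two estimates agree closely here because the weights are nearly flat after one new datum; with many new observations the weights degenerate, which is exactly the problem SMC addresses with resampling and MCMC move steps.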
46,016
Are the errors in this formulation of the simple linear regression model random variables?
I looked up your citation (4th edition, page 21) because I found it very alarming and was relieved to find it is actually given as: $$ \hat{e}_i = y_i − \widehat{E}(Y|X=x_i) = y_i - (\hat{\beta}_0 + \hat{\beta}_1 x_i) \tag{2.3} $$ Which is still confusing, I grant you, and the difference isn't actually germane to your question, but at least it isn't patently false. I'll explain why I found it alarming before discussing your (unrelated, I think) question. The "hat" indicates "estimated", usually by MLE in the context of linear regression, and there is a crucial distinction between "true errors", which are denoted $\epsilon_i$ and are normally distributed and i.i.d., and "residuals", which are denoted $e_i$ and are not i.i.d. The formula without the hats would imply the two are exactly equal, which is not the case. On to your real question, which boils down to, "are the given data $x_i$ and $y_i$ random or not?" If you believe the pairs $(x_i, y_i)$ are known and non-random, that is, if you believe that $\forall\; 1 \leq i \leq n,\, (x_i, y_i) \in \mathbb{R} \times \mathbb{R} $, then the residuals $e_i$ are also known and non-random, i.e. $\forall\; 1 \leq i \leq n,\, e_i \in \mathbb{R}$. This is because there is a deterministic function for the "best" parameters $\hat{\beta_0}$ and $\hat{\beta_1}$ in terms of those observations, and then a deterministic function for the residuals in terms of those parameters. This point of view is useful and allows us to derive the MLE estimators of $\beta$, for example. It is also the most intuitive view to take when you're sitting in front of a concrete, real-world dataset. However, it kind of puts the cart before the horse and basically shuts down certain kinds of statistical analysis. For example, we cannot talk about the "distribution" of $\hat{\beta}_1$ because it is not a random variable and therefore has no distribution! How can we then talk about something like the Wald test?
Likewise, how do we talk about the "distribution" of residuals so that we can say whether one is an outlier or not? The way this is done is treating the dataset itself as random. When we want to do statistical inference on a known dataset, we can then treat the known values as a realization of the random dataset. The exact construction is a little bit pedantic and is often omitted, but it helps to go through it at least once. First, we say that $X$ and $Y$ are two random variables with some joint probability distribution $F_{X,Y}(\mathbf{\beta}, \sigma^2)$ with parameters $\mathbf{\beta} = [\beta_0, \beta_1]^T $ and $\sigma$. $F_{X,Y}$ is specified by the model $Y = \beta_0 + \beta_1 X + \epsilon,\ \epsilon \sim \mathcal{N}(0, \sigma^2)$. Now, imagine that we have $n$ i.i.d. copies of $F_{X,Y}$ that we combine into one big joint probability function $F_{X_1,Y_1,X_2,Y_2,...,X_n,Y_n}$. Now we can imagine the dataset $(x_i, y_i)$ for $i=1,...,n$ not merely as some known set of numbers, but as a realization sampled from $F_{X_1,Y_1,X_2,Y_2,...,X_n,Y_n}$. Each time we sample, we don't just get one pair of numbers, we get $n$ pairs of numbers: a brand new dataset. But that means the parameters $\hat{\beta}$ get new estimates, and we then calculate new residuals $e_i$, right? Instead of thinking of this as repeated sampling, which is somewhat crude, we can express this entirely in the algebra of random variables. It can be expressed as two $n$-dimensional random vectors $\vec{X}$ and $\vec{Y}$ drawn from $F_{X_1,Y_1,X_2,Y_2,...,X_n,Y_n}$. Now $\hat{\beta}_0$ and $\hat{\beta}_1$ are random variables because they are functions of $(\vec{X}, \vec{Y})$. Likewise, all the $e_i$ are random variables because they are functions of $(\vec{X}, \vec{Y})$.
This state of affairs is much better, because now we can make statements like "The set of residuals $e_i$ cannot be independent because they always sum exactly to zero" or "the standard error of $\hat{\beta}_1$ follows a t-distribution." without talking literal nonsense. (Both of these statements only make sense if their subjects are random variables.) In the real world we can't always go and get a brand-new, randomly sampled dataset. We can approximate this with something like the bootstrap, of course, but doing it for real isn't usually practical. But doing it conceptually allows us to think clearly about how randomness during sampling would affect our regression. You'll note that I did not introduce new notation for $e_i$ and $\hat{\beta}$ but simply said, "now these things, which we previously thought of as concrete realizations, will now be treated as random variables." As far as I can tell, you just have to be on your toes for this kind of signposting - the same kind you found in your textbook - to indicate whether symbols are referring to random or non-random variables because while there are conventions (such as using uppercase roman letters for random variables) they are not consistently applied. If the author tells you $e_i$ is a random variable, he is telling you he is also viewing $x_i$ and $y_i$ as random variables.
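A small simulation makes the "dataset as random" view concrete (a Python sketch; the true parameters, sample size, and noise level are my own illustrative choices): each freshly sampled dataset yields a new $\hat{\beta}_1$, and within any single fit the residuals sum to zero.

```python
import random

random.seed(1)

def ols(xs, ys):
    """OLS estimates (b0, b1) for simple linear regression."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sxy / sxx
    return ybar - b1 * xbar, b1

# Repeatedly sample fresh datasets from Y = 1 + 2X + eps, eps ~ N(0, 0.5^2).
slopes = []
for _ in range(2000):
    xs = [random.random() for _ in range(30)]
    ys = [1.0 + 2.0 * x + random.gauss(0.0, 0.5) for x in xs]
    b0, b1 = ols(xs, ys)
    slopes.append(b1)

# Treated as a random variable, the slope estimate is centred near the
# true value 2 across resampled datasets.
avg_slope = sum(slopes) / len(slopes)

# For the last single fit, the residuals sum to (numerically) zero.
resid_sum = sum(y - (b0 + b1 * x) for x, y in zip(xs, ys))
print(avg_slope, resid_sum)
```

The spread of the simulated slopes is exactly the sampling distribution that the Wald test and standard-error formulas refer to.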
46,017
Are the errors in this formulation of the simple linear regression model random variables?
In simple linear regression, we assume that the observations are randomly perturbed from the conditional expected value, i.e. $E[Y|X=x_i]$; so, each of your observations are assumed to be generated from a model of the form: $$Y=\beta_0+\beta_1X+\epsilon \ \ , \epsilon\sim N(0,\sigma^2)$$ This makes each $\epsilon_i$ a RV by definition. Think about a box where you give $x_i$ and get $y_i$, and you never know what's inside, how much error is introduced by the box etc. Even if we really know that the relation is of the form given above, we don't know the true $\beta_0,\beta_1$. If we had known those quantities, we would easily recover $\epsilon_i$. Instead, we estimate those, and get residuals.
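To see the last point numerically, here is a short simulation (a Python sketch; the parameter values are my own illustrative choices): generating data with known errors $\epsilon_i$ and then fitting the line shows that the residuals approximate, but do not equal, those errors, because $\hat{\beta}_0, \hat{\beta}_1$ differ from the true $\beta_0, \beta_1$.

```python
import random

random.seed(1)

# Generate data from Y = 1 + 2X + eps with the errors recorded.
xs = [random.random() for _ in range(40)]
eps = [random.gauss(0.0, 0.3) for _ in range(40)]
ys = [1.0 + 2.0 * x + e for x, e in zip(xs, eps)]

# Fit by OLS: the estimates only approximate (beta0, beta1) = (1, 2).
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
b0 = ybar - b1 * xbar

# Each residual differs from the true error by (beta0 - b0) + (beta1 - b1)*x_i,
# so the two are close but never identical.
resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
max_gap = max(abs(r - e) for r, e in zip(resid, eps))
print(b0, b1, max_gap)
```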
46,018
Don't understand why glmm random effect variance is zero. Have reviewed similar questions still dont get it
I tried with the glmmADMB package, an alternative to lme4 for linear mixed modelling. You can install this package with this code:

install.packages("R2admb")
install.packages("glmmADMB",
                 repos = c("http://glmmadmb.r-forge.r-project.org/repos", getOption("repos")),
                 type = "source")

Then you go:

library(glmmADMB)

helpmeobiwan <- list(
  NestPlot = c(1, 0, 0, 0, 0 ,0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,0, 0, 0, 0, 0, 0, 0, 0, 0),
  NumDeadJun = c( 0.1409216, -0.1932639,-0.5274494,-0.5274494, 0.1409216, -0.5274494, -0.5274494 , 0.4751071, -0.5274494 , 2.1460347 ,-0.5274494, -0.1932639, 0.8092926, -0.5274494, -0.5274494 ,-0.5274494 ,-0.1932639, 0.1409216, -0.5274494, -0.5274494 ,-0.5274494, -0.5274494 ,-0.5274494, 0.1409216,-0.5274494, -0.5274494 ,-0.5274494, 0.1409216, -0.5274494, 0.1409216, -0.5274494, -0.5274494, -0.5274494, -0.1932639, -0.1932639, -0.5274494, 0.4751071 , 0.1409216 ,-0.5274494, -0.5274494, -0.5274494, -0.5274494, -0.5274494, -0.5274494, -0.5274494, -0.5274494, -0.1932639, -0.5274494, -0.5274494 ,-0.5274494 ,-0.5274494, 0.1409216, -0.5274494, -0.5274494, -0.1932639, -0.5274494, -0.5274494, -0.5274494, 0.1409216, -0.5274494, -0.5274494 ,3.1485912 , 2.4802202, 1.4776637, -0.5274494 , 2.8144057, -0.5274494, -0.5274494, 1.1434781, 3.8169623, 3.8169623 ,-0.1932639, -0.5274494 ,1.4776637 , 1.8118492, -0.5274494),
  RandomPair = c( "Madera2" , "Starfire1", "Madera2" , "Madera3" , "Starfire1" ,"Starfire1", "Starfire2", "Madera1" , "Madera3" ,"Starfire2" ,"Starfire2", "Madera1", "Madera2", "Starfire1", "Starfire1" ,"Starfire1", "Madera1", "Madera2" , "Starfire1", "Starfire1", "Starfire1", "Madera1" , "Starfire1", "Starfire1", "Madera1", "Madera1" , "Starfire1", "Madera2" , "Madera1", "Madera2" , "Madera1" , "Madera1" , "Starfire1" ,"Starfire1", "Starfire1" ,"Starfire1" ,"Madera2" , "Madera2", "Starfire2" ,"Starfire2", "Starfire2" ,"Madera3" , "Madera3" , "Madera3" , "Madera3" , "Madera3" , "Starfire2", "Starfire2", "Starfire2", "Starfire2" ,"Starfire2", "Madera3", "Madera3" , "Starfire2", "Madera3" , "Madera1" , "Starfire2" ,"Starfire1", "Madera2" , "Madera3" , "Madera3" , "Madera2" , "Madera3" ,"Starfire2", "Madera3", "Starfire1", "Madera3" , "Starfire2", "Starfire1", "Madera3", "Starfire1", "Starfire2" ,"Madera1" , "Starfire2", "Starfire2", "Madera1" ))

dontworryLeia <- helpmeobiwan
dontworryLeia$RandomPair <- as.factor(dontworryLeia$RandomPair)
attach(dontworryLeia)

mod <- glmmadmb(NestPlot ~ NumDeadJun + (1|RandomPair), family='binomial', data=dontworryLeia)
mod
summary(mod)
drop1(mod)

Note that RandomPair was not originally a factor, which explains the as.factor() conversion above. I guess you meant NestPlot and not NestPair in your m1 model. Anyway this should work!
46,019
Don't understand why GLMM random-effect variance is zero. Have reviewed similar questions but still don't get it
Function glmer() uses by default the Laplace approximation, which is not optimal for dichotomous data. A better alternative is the adaptive Gaussian quadrature. You can use this method by setting argument nAGQ of glmer() to a higher number (e.g., 11 or 15), or alternatively by using the GLMMadaptive package. In your example, it gives:

library("GLMMadaptive")

helpmeobiwan <- list(
  NestPlot = c(1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
               1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
               1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
               0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0),
  NumDeadJun = c(0.1409216, -0.1932639, -0.5274494, -0.5274494, 0.1409216, -0.5274494,
                 -0.5274494, 0.4751071, -0.5274494, 2.1460347, -0.5274494, -0.1932639,
                 0.8092926, -0.5274494, -0.5274494, -0.5274494, -0.1932639, 0.1409216,
                 -0.5274494, -0.5274494, -0.5274494, -0.5274494, -0.5274494, 0.1409216,
                 -0.5274494, -0.5274494, -0.5274494, 0.1409216, -0.5274494, 0.1409216,
                 -0.5274494, -0.5274494, -0.5274494, -0.1932639, -0.1932639, -0.5274494,
                 0.4751071, 0.1409216, -0.5274494, -0.5274494, -0.5274494, -0.5274494,
                 -0.5274494, -0.5274494, -0.5274494, -0.5274494, -0.1932639, -0.5274494,
                 -0.5274494, -0.5274494, -0.5274494, 0.1409216, -0.5274494, -0.5274494,
                 -0.1932639, -0.5274494, -0.5274494, -0.5274494, 0.1409216, -0.5274494,
                 -0.5274494, 3.1485912, 2.4802202, 1.4776637, -0.5274494, 2.8144057,
                 -0.5274494, -0.5274494, 1.1434781, 3.8169623, 3.8169623, -0.1932639,
                 -0.5274494, 1.4776637, 1.8118492, -0.5274494),
  RandomPair = c("Madera2", "Starfire1", "Madera2", "Madera3", "Starfire1", "Starfire1",
                 "Starfire2", "Madera1", "Madera3", "Starfire2", "Starfire2", "Madera1",
                 "Madera2", "Starfire1", "Starfire1", "Starfire1", "Madera1", "Madera2",
                 "Starfire1", "Starfire1", "Starfire1", "Madera1", "Starfire1", "Starfire1",
                 "Madera1", "Madera1", "Starfire1", "Madera2", "Madera1", "Madera2",
                 "Madera1", "Madera1", "Starfire1", "Starfire1", "Starfire1", "Starfire1",
                 "Madera2", "Madera2", "Starfire2", "Starfire2", "Starfire2", "Madera3",
                 "Madera3", "Madera3", "Madera3", "Madera3", "Starfire2", "Starfire2",
                 "Starfire2", "Starfire2", "Starfire2", "Madera3", "Madera3", "Starfire2",
                 "Madera3", "Madera1", "Starfire2", "Starfire1", "Madera2", "Madera3",
                 "Madera3", "Madera2", "Madera3", "Starfire2", "Madera3", "Starfire1",
                 "Madera3", "Starfire2", "Starfire1", "Madera3", "Starfire1", "Starfire2",
                 "Madera1", "Starfire2", "Starfire2", "Madera1"))

helpmeobiwan <- as.data.frame(helpmeobiwan)

fm <- mixed_model(NestPlot ~ NumDeadJun, random = ~ 1 | RandomPair,
                  family = binomial(), data = helpmeobiwan)
summary(fm)
#>
#> Call:
#> mixed_model(fixed = NestPlot ~ NumDeadJun, random = ~1 | RandomPair,
#>     data = helpmeobiwan, family = binomial())
#>
#> Data Descriptives:
#> Number of Observations: 76
#> Number of Groups: 5
#>
#> Model:
#>  family: binomial
#>  link: logit
#>
#> Fit statistics:
#>   log.Lik      AIC      BIC
#>  -46.2248 98.44959 97.27791
#>
#> Random effects covariance matrix:
#>                StdDev
#> (Intercept) 0.0477673
#>
#> Fixed effects:
#>             Estimate Std.Err z-value  p-value
#> (Intercept)  -0.1568  0.2829 -0.5544 0.579304
#> NumDeadJun   -1.2274  0.4917 -2.4961 0.012558
#>
#> Integration:
#> method: adaptive Gauss-Hermite quadrature rule
#> quadrature points: 11
#>
#> Optimization:
#> method: hybrid EM and quasi-Newton
#> converged: TRUE
46,020
Using indistinguishable subjects as predictors/random effects
Perhaps something is not clear to me, but the following simulation for your setting seems to work with the random effects structure you suggested, i.e.,

set.seed(2019)
N <- 50 # number of subjects

# id indicators for pairs
ids <- t(combn(N, 2))
id_i <- ids[, 1]
id_j <- ids[, 2]

# random effects
b_i <- rnorm(N, sd = 2)
b_j <- rnorm(N, sd = 4)

# simulate normal outcome data from the mixed model
# that has two random effects for i and j
y <- 10 + b_i[id_i] + b_j[id_j] + rnorm(nrow(ids), sd = 0.5)
DF <- data.frame(y = y, id_i = id_i, id_j = id_j)

library("lme4")
#> Loading required package: Matrix
lmer(y ~ 1 + (1 | id_i) + (1 | id_j), data = DF)
#> Linear mixed model fit by REML ['lmerMod']
#> Formula: y ~ 1 + (1 | id_i) + (1 | id_j)
#>    Data: DF
#> REML criterion at convergence: 2403.071
#> Random effects:
#>  Groups   Name        Std.Dev.
#>  id_i     (Intercept) 2.1070
#>  id_j     (Intercept) 3.0956
#>  Residual             0.5053
#> Number of obs: 1225, groups:  id_i, 49; id_j, 49
#> Fixed Effects:
#> (Intercept)
#>       9.562

However, I think you will not be able to include covariates at the subject level, because you only have data at the pair level.

EDIT: Based on the comments, the symmetric nature of the data has become more clear. As far as I know, the current implementation of lmer() does not allow for such data. The code below simulates and fits a model for such data using Stan.

set.seed(2019)
N <- 50 # number of subjects

# id indicators for pairs
ids <- expand.grid(i = seq_len(N), j = seq_len(N))
ids <- ids[ids$i != ids$j, ]
id_i <- ids$i
id_j <- ids$j

# random effects
b <- rnorm(N, sd = 2)

# simulate normal outcome data from the mixed model that has one
# random effect per subject but accounts for both members i and j of a pair
y <- 10 + b[id_i] + b[id_j] + rnorm(nrow(ids), sd = 0.5)

library("rstan")

# N is the number of dyadic observations, n the number of subjects
Data <- list(N = length(y), n = length(unique(id_i)),
             id_i = id_i, id_j = id_j, y = y)

model <- "
data {
  int n;
  int N;
  int id_i[N];
  int id_j[N];
  vector[N] y;
}
parameters {
  vector[n] b;
  real beta;
  real<lower = 0> sigma_b;
  real<lower = 0> sigma;
}
transformed parameters {
  vector[N] eta;
  for (k in 1:N) {
    eta[k] = beta + b[id_i[k]] + b[id_j[k]];
  }
}
model {
  sigma_b ~ student_t(3, 0, 10);
  for (i in 1:n) {
    b[i] ~ normal(0.0, sigma_b);
  }
  beta ~ normal(0.0, 10);
  sigma ~ student_t(3, 0, 10);
  y ~ normal(eta, sigma);
}
"

fit <- stan(model_code = model, data = Data,
            pars = c("beta", "sigma_b", "sigma"))
summary(fit)
46,021
Using indistinguishable subjects as predictors/random effects
For people looking for a little more theory, Peter Hoff (Duke University, formerly University of Washington) has worked on this. His 2005 JASA paper "Bilinear Mixed-Effects Models for Dyadic Data" (pdf and code) describes an MCMC approach that is very similar to @Dimitris Rizopoulos' answer above. That rough code was later turned into a more polished R package called AMEN (Additive and Multiplicative Effects Models), and this 2017 preprint/tech report is effectively a tutorial with a few worked examples. He calls this the "Social Relations Regression Model."
46,022
Interpretation of standardized (z-score rescaled) linear model coefficients
Noting for readers who might have missed it that you standardized (i.e. rescaled by z-score) only the predictors and not your response variable. The linear model coefficients can be interpreted as the change in the response (i.e. dependent variable) for a 1 standard deviation increase in the predictor (i.e. independent variable). In your case, for example: a 1 standard deviation increase in Total N in soil is associated with a decrease (because of the negative coefficient value) in the vegetation change index by ~0.03 units. Suppose instead that you had standardized all your data i.e. both predictors and the response variable. In that case, the coefficients could be interpreted as the change in the response variable (in standard deviations) for a 1 standard deviation change in the predictor.
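As a quick numerical illustration of this interpretation (a sketch in Python with made-up data, not the soil data from the question): fitting the same simple regression on the raw and on the z-scored predictor shows that the standardized slope equals the change in the response per one standard deviation of the predictor, i.e. the raw slope times the predictor's SD.

```python
import random
import statistics as st

random.seed(0)
n = 200
x = [random.gauss(10, 3) for _ in range(n)]                 # raw predictor, SD ~ 3
y = [2.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]       # true raw slope = 0.5

def slope(xs, ys):
    # OLS slope of a simple regression: cov(x, y) / var(x)
    mx, my = st.mean(xs), st.mean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

mx, sx = st.mean(x), st.stdev(x)
z = [(xi - mx) / sx for xi in x]      # z-scored predictor

b_raw = slope(x, y)   # change in y per 1 raw unit of x
b_std = slope(z, y)   # change in y per 1 SD of x

print(b_std, b_raw * sx)  # the two numbers agree
```

The identity holds exactly because z-scoring is a linear rescaling of the predictor: centering only moves the intercept, and dividing by the SD multiplies the slope by that SD.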
46,023
What is the difference between probabilistic forecasting and quantile forecasting?
In a sense, you are right: if we generate forecasts for the 0.001, 0.002, ..., 0.998 and 0.999 quantile, then we pretty much already have a full probabilistic forecast. Essentially, the predicted density would be a histogram with 998 bins. However, I have rarely seen this. (One of the rare examples is the GEFCom2014 competition; Hong et al., 2016, IJF, which required submitting 99 quantile forecasts.) More frequently, one sees people doing it the other way around: predicting a density and deriving quantile forecasts from that. One potential problem is that quantile forecasts for very close quantiles may be inconsistent: the 0.998 quantile forecast should always be lower than the 0.999 quantile forecast, but if you don't take particular care, it may be the other way around for some time points in the future. This problem also afflicts quantile regression and prediction. Of course, this problem will be more prevalent if your quantiles are close together. Incidentally, and just to help search engines, related terms are density forecasting and predictive densities or predictive distributions (the latter being the output from the former).
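To illustrate why deriving quantiles from a single predictive density avoids the crossing problem: quantiles of one distribution are automatically non-decreasing in the quantile level. A small sketch (the Gaussian predictive distribution here is purely a hypothetical example):

```python
from statistics import NormalDist

# a hypothetical Gaussian predictive density for one future time point
pred = NormalDist(mu=100.0, sigma=15.0)

# derive 999 quantile forecasts from that single density
levels = [i / 1000 for i in range(1, 1000)]     # 0.001, 0.002, ..., 0.999
q = [pred.inv_cdf(p) for p in levels]

# quantiles of one distribution cannot cross: they are non-decreasing
assert all(a <= b for a, b in zip(q, q[1:]))
print(q[499])  # the 0.500 quantile is the predictive median, 100.0 here
```

If instead each quantile were forecast by a separately estimated model (as in quantile regression), nothing would enforce this monotonicity, which is exactly the inconsistency described above.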
46,024
How to fit a longitudinal GAM mixed model (GAMM)
What @Roland is getting at is to use a random spline basis for the time-by-id random part. So your model would become:

m <- gam(y ~ s(V1) + V2 + s(time, id, bs = "fs"),
         family = gaussian, data = dat, method = "REML")

This model says that the effect of time is smooth and varies by id, with a separate smooth being estimated for each id. Each smooth is assumed to have the same wiggliness (a single smoothness penalty is estimated for all the time smoothers) but can differ in shape.

To estimate a separate "global" effect, the model could be:

m <- gam(y ~ s(V1) + V2 + s(time) + s(time, id, bs = "fs"),
         family = gaussian, data = dat, method = "REML")

If you want similar models but where each smooth can have different wiggliness as well as shape, then by-variable smoothers can be used:

## without a "global" effect
m <- gam(y ~ s(V1) + V2 + s(id, bs = "re") + s(time, by = id),
         family = gaussian, data = dat, method = "REML")

## with a "global" effect
m <- gam(y ~ s(V1) + V2 + s(id, bs = "re") + s(time) + s(time, by = id, m = 1),
         family = gaussian, data = dat, method = "REML")

The m = 1 means that each subject-specific smoother uses a penalty on the squared first derivative, which penalises departure from a flat function (no effect). As this penalty is on the subject-specific smooths, the model penalises deviations from the "global" smooth.

Some colleagues and I have described these models in some detail in a paper submitted to PeerJ, which is available as a preprint. A new version in response to the reviewers' comments should be up in a few days (we've submitted it to the journal).
46,025
PCA principal components in sklearn not matching eigen-vectors of covariance calculated by numpy
While this is really a pure Python question, which is not a great fit for Cross Validated, let me help you anyway. Both procedures find the correct eigenvectors; the difference is in their representation. PCA() stores the eigenvectors as the rows of pca.components_, while np.linalg.eig() returns them as the columns of its eigenvector matrix, so you need to transpose one of the two before comparing. Remember also that eigenvectors are only unique up to a sign (and that np.linalg.eig() does not sort the eigenvalues, so in general you may need to reorder its columns by decreasing eigenvalue first). Indeed, assuming pca was fit on the same data matrix X used to build the covariance matrix, i.e.

import numpy as np
from sklearn.decomposition import PCA

pca = PCA().fit(X)                             # X: the (n_samples, 4) data matrix
eig_val, eig_vec = np.linalg.eig(np.cov(X.T))

a simple sign-insensitive check yields:

print(abs(eig_vec.T.round(10)) == abs(pca.components_.round(10)))
[[ True  True  True  True]
 [ True  True  True  True]
 [ True  True  True  True]
 [ True  True  True  True]]
46,026
What is the definition of the geometric mean of a random variable?
You can define the geometric mean of a strictly positive random variable readily enough; an easy method would be to take: $$GM(X)=\exp(E[\log(X)])$$ For a discrete variable you can write the geometric mean as $\prod_i x_i^{p_i}$ where $p_i={p(X=x_i)}$.
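A small numerical check, in Python, that the two expressions agree (the discrete distribution here is an arbitrary made-up example):

```python
from math import exp, log, prod

# an arbitrary strictly positive discrete random variable: P(X = x_i) = p_i
xs = [1.0, 4.0, 16.0]
ps = [0.5, 0.3, 0.2]

# GM(X) = exp(E[log X])
gm_exp_log = exp(sum(p * log(x) for x, p in zip(xs, ps)))

# GM(X) = prod_i x_i ** p_i
gm_prod = prod(x ** p for x, p in zip(xs, ps))

print(gm_exp_log, gm_prod)  # the two values agree
```

The equality is immediate from taking logs of the product form, which turns it into the expectation of log X.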
46,027
What is the definition of the geometric mean of a random variable?
I have no idea what the "official" answer is, if any, and this might just be a repeat of Glen_b's answer with more calculus-y language (I don't know), but the whole idea is to be analogous to the continuous version of the arithmetic mean, right? Well the arithmetic mean of a finite number of elements can be defined as: $$\sum_{i=1}^n \frac{1}{n} x_i$$ For the arithmetic mean of the values of f(x) over a continuous region of real-number x values starting from a and ending at b, we instead write: $$\frac{\int_a^b f(x)dx}{b-a}$$ The logic I'm glossing over here is exactly what x values we're summing together, which is something to do with Riemann sums, but the point is that we can do it. Similarly the geometric mean could be written as: $$\prod_{i=1}^n x_i^{\frac{1}{n}} = e^{\ln\left(\prod_{i=1}^n x_i^{\frac{1}{n}}\right)} = e^{\sum_{i=1}^n \frac{1}{n} \ln(x_i)} = \exp\left( \sum_{i=1}^n \frac{1}{n} \ln(x_i) \right)$$ This is just the definition of an arithmetic mean stuck in an exponent. Thus, it would be intuitive at least to generalize and say that the "geometric mean" of the values of f(x) over a continuous region of real-number x values starting with a and ending at b is: $$\exp\left(\frac{\int_a^b \ln(f(x))dx}{b-a}\right)$$ As Glen_b rightly pointed out, this only works if f(x) is always non-negative, because ln(x) is undefined for negative x values, but how often do people use geometric means on negative numbers anyway? It's only even possible if there are an odd number of values you're finding the mean of, or if their product is positive despite some being negative, and even then I'm not sure what significance the number you would get would have. To generalize to higher dimensions, i.e. functions of more than one variable, we can either observe that b-a is the length of the line we integrated across, or observe that: $$\frac{\int_a^b f(x)dx}{b-a} = \frac{\int_a^b f(x)dx}{\int_a^b dx}$$ Thus for double, triple, quadruple, etc. integrals of the values of a function of multiple variables over a continuous region, we divide the integral by the area, volume, or whatever sort of hypervolume of the region we're integrating across: $$\frac{\iint_A f(x,y)\, dx\, dy}{\iint_A dx\, dy}$$ $$\frac{\iiint_V f(x,y,z)\, dx\, dy\, dz}{\iiint_V dx\, dy\, dz}$$ $$\frac{\iiiint_{HV} f(x_1,x_2,x_3,x_4)\, dx_1\, dx_2\, dx_3\, dx_4}{\iiiint_{HV} dx_1\, dx_2\, dx_3\, dx_4}$$ $$etc.$$ Thus, we can just do the same with continuous "geometric means": $$\exp\left(\frac{\iint_A \ln(f(x,y))\, dx\, dy}{\iint_A dx\, dy}\right)$$ $$\exp\left(\frac{\iiint_V \ln(f(x,y,z))\, dx\, dy\, dz}{\iiint_V dx\, dy\, dz}\right)$$ $$\exp\left(\frac{\iiiint_{HV} \ln(f(x_1,x_2,x_3,x_4))\, dx_1\, dx_2\, dx_3\, dx_4}{\iiiint_{HV} dx_1\, dx_2\, dx_3\, dx_4}\right)$$ $$etc.$$
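As a numerical sanity check of this formula (the following sketch is not from the original answer; the function and interval are arbitrary illustrative choices), the continuous geometric mean of $f(x)=x$ on $[1,2]$ can be approximated with a midpoint Riemann sum and compared against the closed form $\exp(2\ln 2 - 1)$:

```python
import numpy as np

def continuous_geometric_mean(f, a, b, n=100_000):
    """Approximate exp( (1/(b-a)) * integral_a^b ln(f(x)) dx )
    with a midpoint Riemann sum; f must be positive on [a, b]."""
    x = a + (np.arange(n) + 0.5) * (b - a) / n   # midpoints of n subintervals
    return np.exp(np.mean(np.log(f(x))))          # mean of ln f, then exp

# Example: f(x) = x on [1, 2].  Analytically, the integral of ln x over
# [1, 2] is 2*ln(2) - 1, so the geometric mean is exp(2*ln(2) - 1) = 4/e.
approx = continuous_geometric_mean(lambda x: x, 1.0, 2.0)
exact = np.exp(2 * np.log(2) - 1)
print(approx, exact)   # both ≈ 1.4715
```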
46,028
Unbiased estimator of binomial PMF
Since, for a Binomial $\text{B}(n,p)$ variable $X$, and $k\le n$, the factorial moment is given by $$\mathbb{E}_p[X(X-1)\cdots(X-k+1)] = n(n-1)\cdots(n-k+1)p^k,$$ the $s$ Bernoulli rvs $\lbrace X_i\rbrace_{i=1}^{s}$ can easily return independent unbiased estimates of both $p^k$ and $(1-p)^{n-k}$ if $k+(n-k)\le s$, that is, if $n\le s$. And hence of $\mathbb{P}(X=k)\propto p^k(1-p)^{n-k}$. It sounds likely that an unbiased estimator of the above does not exist when $n>s$, because, for instance, developing $(X_1+\ldots+X_s)^k$ shows that the maximum number of terms in a product is $s$, with expectation $p^s$. No higher power of $p$ or $(1-p)$ can appear for this reason. Actually, the proof is straightforward: suppose there exists such an unbiased estimator, denoted by $G(X_1+\ldots+X_s)$, since by sufficiency there exists an unbiased estimator based on the sum. Then it satisfies $$\mathbb{E}_p[G(X_1+\ldots+X_s)]=\sum_{j=0}^s \underbrace{G(j){s \choose j}}_\text{independent from $p$}p^j(1-p)^{s-j}=p^k(1-p)^{n-k}$$ or $$\sum_{j=0}^s \overbrace{G(j){s \choose j}}^{\text{non-negative}}p^{j-k}(1-p)^{s-j-n+k}=1$$ Letting $p$ tend to $0$ or $1$ leads to explosive terms when $j-k<0$ and when $s-j-n+k<0$, unless the coefficient $G(j){s \choose j}$ is equal to zero. If $n>s$, then, for all $0\le j\le s$ and all $0\le k\le n$, either $j<k$ or $s-j<n-k$, which leads to an impossibility since $G(m)=0$ for all $0\le m\le s$. Therefore there is no unbiased estimator of $p^k(1-p)^{n-k}$ when $n>s$.
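A quick Monte Carlo check of the first claim (the simulation setup, sample sizes, and parameter values below are my own illustrative choices, not part of the original proof): when $n\le s$, multiplying $k$ of the Bernoulli draws with $n-k$ of the complementary draws $1-X_i$, using disjoint index sets so the factors are independent, gives an unbiased estimate of $p^k(1-p)^{n-k}$:

```python
import numpy as np

# Hypothetical setup: s iid Bernoulli(p) draws X_1..X_s with n <= s.
# The product of the first k draws and of (1 - X_i) for the next (n - k)
# draws has expectation p^k (1-p)^(n-k), because the factors are independent.
rng = np.random.default_rng(0)
p, s, n, k = 0.5, 5, 5, 2
reps = 200_000
X = rng.binomial(1, p, size=(reps, s))          # reps experiments, s draws each
est = X[:, :k].prod(axis=1) * (1 - X[:, k:n]).prod(axis=1)
print(est.mean(), p**k * (1 - p)**(n - k))      # both ≈ 0.03125
```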
46,029
Why does Naive Bayes work better when the number of features >> sample size compared to more sophisticated ML algorithms?
What the author is getting at is that Naive Bayes implicitly treats all features as being independent of one another, and therefore the sorts of curse-of-dimensionality problems which typically rear their heads when dealing with high-dimensional data do not apply. If your data has $k$ dimensions, then a fully general ML algorithm which attempts to learn all possible correlations between these features has to deal with $2^k$ possible feature interactions, and therefore needs on the order of $2^k$ data points to be performant. Because Naive Bayes assumes independence between features, however, it only needs on the order of $k$ data points, exponentially fewer. This comes at the cost of only being able to capture much simpler mappings between the input variables and the output class, and as such Naive Bayes could never compete with something like a large neural network trained on a large dataset when it comes to tasks like image recognition, although it might perform better on very small datasets.
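To make the sample-complexity point concrete, here is a small sketch with synthetic data (the dataset, mean shift, and sizes are invented for illustration): Gaussian Naive Bayes fits only a per-class mean and variance for each of the $k$ features, so it remains usable even with far more features than samples:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Illustrative synthetic data: 20 samples with 500 features each, i.e. far
# more features than samples.  GaussianNB only estimates a per-class mean
# and variance for each feature (O(k) parameters), so it can still be fit.
rng = np.random.default_rng(0)
n_per_class, n_features = 10, 500
X0 = rng.normal(0.0, 1.0, size=(n_per_class, n_features))   # class 0 cluster
X1 = rng.normal(0.5, 1.0, size=(n_per_class, n_features))   # class 1, shifted mean
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

model = GaussianNB().fit(X, y)
print(model.theta_.shape)       # (2, 500): one mean per class per feature
print(model.score(X, y))        # high training accuracy despite n << k
```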
46,030
Uncorrelatedness + Joint Normality = Independence. Why? Intuition and mechanics
The joint probability density function (pdf) of the bivariate normal distribution is: $$f(x_1,x_2)=\frac 1{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp[-\frac z{2(1-\rho^2)}], $$ where $$z=\frac{(x_1-\mu_1)^2}{\sigma_1^2}-\frac{2\rho(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2}+\frac{(x_2-\mu_2)^2}{\sigma_2^2}.$$ When $\rho = 0$, $$\begin{align}f(x_1,x_2) &=\frac 1{2\pi\sigma_1\sigma_2}\exp[-\frac 12\left\{\frac{(x_1-\mu_1)^2}{\sigma_1^2}+\frac{(x_2-\mu_2)^2}{\sigma_2^2}\right\} ]\\ & = \frac 1{\sqrt{2\pi}\sigma_1}\exp[-\frac 12\left\{\frac{(x_1-\mu_1)^2}{\sigma_1^2}\right\}] \frac 1{\sqrt{2\pi}\sigma_2}\exp[-\frac 12\left\{\frac{(x_2-\mu_2)^2}{\sigma_2^2}\right\}]\\ &= f(x_1)f(x_2)\end{align}$$ So they are independent.
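The factorization can also be checked numerically (the parameter values and test points below are arbitrary choices; this is just an illustration of the algebra above):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# With rho = 0, the bivariate normal density should equal the product of
# the two marginal normal densities at every point.
mu1, mu2, s1, s2 = 1.0, -2.0, 1.5, 0.7
joint = multivariate_normal(mean=[mu1, mu2], cov=[[s1**2, 0.0], [0.0, s2**2]])

pts = np.array([[0.0, 0.0], [1.0, -2.0], [-0.5, 1.3]])
lhs = joint.pdf(pts)                                               # f(x1, x2)
rhs = norm.pdf(pts[:, 0], mu1, s1) * norm.pdf(pts[:, 1], mu2, s2)  # f(x1) f(x2)
print(np.allclose(lhs, rhs))   # True
```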
46,031
Uncorrelatedness + Joint Normality = Independence. Why? Intuition and mechanics
Joint normality of two random variables $X,Y$ can be characterized in either of two simple ways: For every pair $a,b$ of (non-random) real numbers, $aX+bY$ has a univariate normal distribution. There are random variables $Z_1,Z_2\sim\operatorname{\text{i.i.d.}} \operatorname N(0,1)$ and real numbers $a,b,c,d$ such that $$\begin{align} X & = aZ_1+bZ_2 \\ \text{and } Y & = cZ_1 + dZ_2. \end{align}$$ That the first of these follows from the second is easy to show. That the second follows from the first takes more work, and maybe I'll post on it soon . . . If the second one is true, then $\operatorname{cov}(X,Y) = ac + bd.$ If this covariance is $0,$ then the vectors $(a,b),$ $(c,d)$ are orthogonal to each other. Then $X$ is a scalar multiple of the orthogonal projection of $(Z_1,Z_2)$ onto $(a,b)$ and $Y$ onto $(c,d).$ Now conjoin the fact of orthogonality with the circular symmetry of the joint density of $(Z_1,Z_2),$ to see that the distribution of $(X,Y)$ should be the same as the distribution of two random variables, one of which is a scalar multiple of the orthogonal projection of $(Z_1,Z_2)$ onto the $x$-axis, i.e. it is a scalar multiple of $Z_1,$ and the other is similarly a scalar multiple of $Z_2.$
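A quick simulation of this construction (the coefficients and sample size are arbitrary illustrative choices, not part of the original answer): with $(a,b)$ orthogonal to $(c,d)$, the empirical covariance of $X$ and $Y$ is near $ac+bd=0$, and even nonlinear functions of $X$ and $Y$ come out uncorrelated, as independence requires:

```python
import numpy as np

# Build X = a*Z1 + b*Z2 and Y = c*Z1 + d*Z2 with (a, b) orthogonal to (c, d).
rng = np.random.default_rng(1)
a, b = 2.0, 1.0
c, d = -1.0, 2.0            # (a,b).(c,d) = -2 + 2 = 0
Z1, Z2 = rng.standard_normal((2, 500_000))
X = a * Z1 + b * Z2
Y = c * Z1 + d * Z2

print(np.cov(X, Y)[0, 1])               # ≈ ac + bd = 0
print(np.corrcoef(X**2, Y**2)[0, 1])    # ≈ 0: nonlinear functions decorrelate too
```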
46,032
Scaling separately in train and test set? [duplicate]
Any kind of transformation of the data representation that "takes" information from the data should only be "fitted" on the training data. This is because: (1) If you were using all the data, you would have information leakage from the validation or test (also called: holdout) data into your model. This is forbidden! As a result your validation/test score estimates will be skewed. (2) The model should also be trained on only one specific data representation. At prediction time the data representation transformation should be applied exactly as it was fitted during training, in most cases (an example of an exception: some kinds of online settings). So in the usual cases of batch training with ERM evaluation or stochastic optimization in deep learning, this kind of normalization should only be fitted during the training stage. This is also why this transformation is grouped into a pipeline together with the model in most ML library designs: then they can be fitted together as well as deployed as one. Of course this can lead to breaking of assumptions at runtime. Say you min-max-normalize; you would expect that attribute to fall into $[0, 1]$ afterwards. Say the training max was $m$; then it could very well be that new data has that very attribute with a value $x > m$, so applying the min-max-normalization you would get a transformed value $\tilde{x} > 1$. This does not work so well in some cases, so you would do some kind of truncation, setting the value to $1$. If you expect many outliers you may want to take a look at RobustScaler in scikit-learn, for example.
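The pipeline idea described above can be sketched like this (the synthetic data and model choice are mine, not from the original answer); the scaler's statistics come from the training split only and are reused unchanged on the test split:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data: the scaler is fitted on the training split only, and the
# same fitted scaler is reused on the test split, so no information leaks
# from the holdout data into the preprocessing.
rng = np.random.default_rng(0)
X_train = rng.normal(5.0, 2.0, size=(200, 3))
y_train = (X_train.sum(axis=1) > 15).astype(int)
X_test = rng.normal(5.0, 2.0, size=(50, 3))

pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train)                      # fits scaler + model on train only
scaler = pipe.named_steps["standardscaler"]
print(np.allclose(scaler.mean_, X_train.mean(axis=0)))  # True: train statistics
preds = pipe.predict(X_test)                    # test data scaled with train stats
```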
46,033
Scaling separately in train and test set? [duplicate]
You should use the train data's mean and std. deviation. For example, with the StandardScaler class in the scikit-learn library, you'll first use the fit() function with the training data and then the transform() function with the test data, which transforms the given test data using the fitted mean and standard deviation values calculated from the training data you have. If you have significantly different mean and std. dev. values for your training and test sets, then your training set might not be a good representative of your overall population, which may result in more serious problems not limited to the standard scaling you're asking about. Or, your test set is very skewed. If it is skewed and you standard scale it with its own statistics, there is a danger that your test samples might seem to be of a similar nature to your training samples, e.g. situations that are anomalous might seem normal. Also, what were you going to do if I give you one test sample every day; would you wait for enough samples to calculate the test mean?
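A minimal sketch of that fit/transform split (the synthetic data and the deliberately shifted test set are my own illustrative choices):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Fit the scaler on training data only; transform test data with the
# training statistics, i.e. (x - train_mean) / train_std.
rng = np.random.default_rng(42)
X_train = rng.normal(10.0, 3.0, size=(100, 1))
X_test = rng.normal(12.0, 3.0, size=(20, 1))    # deliberately shifted test set

scaler = StandardScaler().fit(X_train)          # statistics come from train only
Z_test = scaler.transform(X_test)

print(scaler.mean_[0])        # ≈ 10, the training mean
print(Z_test.mean())          # not centered at 0: test set scaled with train stats
```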
46,034
Convergence to a Uniform Distribution
I want to make two points. One is that although the concepts in this exercise are relatively sophisticated, a completely elementary simple solution is available. The other is that an appropriate visualization of the problem can lead us in natural steps through a rigorous proof. I will demonstrate that, by pointing out visually evident patterns in a set of graphs and then proving that those patterns are correct, using nothing more than the most basic properties of numbers and limits. Because convergence in distribution is defined in terms of the (pointwise) convergence of the distribution functions, let's understand the latter. Define $F_n$ to be the distribution of $X_n.$ That is, $$F_n(x) = {\Pr}(X_n\le x).$$ Because $X_n$ can take on only a finite set of values--namely, $1/n, 2/n, \ldots, (n-1)/n, n/n=1$--it is necessarily discrete. This allows us to express its distribution $F_n(x)$ as the sum of probabilities of all numbers less than or equal to $x.$ Because all the possible values of all $X_n$ are greater than $0$ and less than or equal to $1,$ we immediately deduce that $F_n(x)=0$ when $x\le 0$ and $F_n(x)=1$ when $x \ge 1.$ What about intermediate values, $x\in(0,1)$? To find these, we may simply multiply $x$ by $n.$ This converts $i/n$ to $i$, whence $$F_n(x) = \frac{1}{n}\text{ times the number of integers in }\{1,2,3,\ldots, n\}\text{ less than or equal to } nx.$$ The expression in words describes the floor function (aka greatest integer function), which associates with any real number $x$ the largest integer less than or equal to $x.$ This is more compactly written $$F_n(x) = \lfloor xn \rfloor / n.\tag{1}$$ Let's plot a few of these functions, shown in the first three red graphs from left to right. The blue graph maps $x$ to $x$ for $0\lt x\lt 1$ and otherwise is $0$ for negative $x$ and equal to $1$ for $x\ge 1.$ It is shown for reference. Let $F_\infty$ denote the function of which it is the graph. 
Another pattern emerges: the red curves never rise above the reference blue curve. That means $F_n(x) \le x$ for all $x.$ This is an immediate consequence of $(1),$ because the floor function is defined as the largest of a set of smaller values. Visually, the red graphs grow closer to the blue graph as $n$ increases. It's tempting to leap to the conclusion that the blue graph must be their limit. Be careful, though, because they don't do this consistently at every point. For instance, for $n=3$ the red graph for $F_n$ touches the blue one at $x=1/3$ and $x=2/3,$ but when we increase $n$ to $4,$ its graph will no longer touch the blue one at those points. The situation isn't all that complicated, though. You can see that the red graphs depart no further than $1/n$ from the blue graph at any point. Algebraically, in terms of the notation $(1),$ this claim is established by noting that the floor of any number $x$ is never more than $1$ away from $x$ itself, whence $$|F_n(x) - F_\infty(x)| = |x - \lfloor nx \rfloor / n| = |nx - \lfloor nx \rfloor|/n \le \frac{1}{n}.$$ A basic fact of real numbers is that they are Archimedean: the sizes of the integers $1,2,3,\ldots, n,\ldots$ increase without bound. Therefore the sizes of the foregoing differences $1/n$ converge to $0$ as $n$ increases without bound. That is, by definition, what it means for $$\lim_{n\to\infty} F_n(x) = F_\infty(x).$$ We have just proven that for $0\lt x\lt 1,$ $F_n(x)$ converges (uniformly!) to $F_\infty(x).$ For all other $x$, $F_n(x) = F_\infty(x).$ Consequently the convergence occurs everywhere. Because $F_\infty$ is the uniform distribution function for the interval $[0,1],$ we are done.
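The two patterns proved above, $F_n(x)\le x$ and $|F_n(x)-x|\le 1/n$, are easy to confirm numerically (this check is an illustration, not part of the original argument):

```python
import numpy as np

# On (0, 1), F_n(x) = floor(n*x)/n never exceeds x and stays within 1/n of it.
x = np.linspace(0.001, 0.999, 10_000)

for n in (3, 10, 100, 1000):
    Fn = np.floor(n * x) / n
    gap = np.max(np.abs(Fn - x))     # sup-norm distance to F_inf(x) = x
    print(n, gap)                    # gap <= 1/n, shrinking toward 0
```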
46,035
Convergence to a Uniform Distribution
From the way the information is given, namely that the probability of $X_n$ taking a specific value is strictly positive, we learn that $X_n$ is certainly a discrete random variable; otherwise we would have $P(X_n = i/n)=0$ (and I guess we can assume that mixed distributions, part discrete part continuous, are not considered here). Moreover, given $P(X_n = i/n)=1/n$, we learn that for every finite $n$, $X_n$ is a discrete uniform random variable with support $\{1/n, 2/n, \ldots, 1\}$. What the exercise asks you to examine is whether, as $n \to \infty$, $X_n$ becomes at the limit a continuous uniform random variable. Does it? The issue is trickier to examine than it may look at first, because the uniform distribution, discrete or continuous, is defined on a bounded support (domain). If we let $n\to \infty$, the number of support points becomes infinite and the support becomes dense in $(0,1]$. Certainly, for $n$ "very large" but finite, the distribution intuitively will be practically indistinguishable from the truly continuous uniform distribution. But to formalize "practically indistinguishable" without officially using $n\to \infty$, it appears you may need careful epsilon-delta limit arguments.
46,036
Can an optimal weighted average ever have negative weights?
In addition to the other excellent answer: yes, it can give negative weights in some cases, even if that can look counterintuitive. Let us see. First, I will go through solving the minimization problem (and I will simplify your notation, replacing $\vec{w}$ with $w$ and so on). Introducing the Lagrange multiplier $\lambda$, define $$ L(w)= w^T C w -2 \lambda (w^T 1-1). $$ Then we find the partial derivatives $$ \frac{\partial L}{\partial \lambda}= w^T 1 - 1 \\ \frac{\partial L}{\partial w} = 2 C w - 2 \lambda 1 $$ Setting these equal to zero, and solving assuming $C$ is positive definite (so invertible), we find $$w = \lambda C^{-1} 1. $$ To find $\lambda$, start with $w^T = \lambda 1^T C^{-1}$; postmultiplying by the vector $1$ gives $\lambda=\frac1{1^T C^{-1} 1}$, so finally $$ w = \frac{C^{-1}1}{1^T C^{-1} 1}. $$ If $C$ is only positive semidefinite (so the inverse does not exist), then the variance is minimized (and equal to zero) by any $w$ in the nullspace of $C$. Let us look at this in the case of two measurements of the same real-world quantity $X$. If the covariance matrix (of measurement errors) has a nullspace of dimension 1, this means that the two error terms are linearly dependent, a highly impractical case. But then we have the model $$ y_1 = X+\epsilon \\ y_2=X+c\epsilon $$ for some error variable $\epsilon$ with expectation zero and variance $\sigma^2$. This means that with probability 1 the error vector $(\epsilon_1, \epsilon_2)$ belongs to the linear subspace $\{ (x_1, x_2)\in \mathbb{R}^2 \colon x_2=c x_1\}$. The orthogonal complement of this space (which is the nullspace of $C$) is $\{(w_1, w_2)\in\mathbb{R}^2 \colon w_2=-w_1/c\}$, and this subspace then contains optimal weights$^\dagger$. Then we can calculate that $$ (y_1, y_2)^T w = (X+\epsilon) w_1 + (X+c\epsilon)w_2 = X(w_1+w_2)+(\epsilon w_1 + c \epsilon w_2) =X(w_1+w_2)+\epsilon(w_1-c w_1/c) = X$$ where in the last step we have used that $w_1+w_2=1$.
So the negative weights in this case make it possible to filter out the error term completely, leaving an estimate without error! But negative weights do not occur only in this unnatural case. By a continuity argument, if the correlation is sufficiently close to 1, we could expect something similar to occur even if $C$ has full rank, so is positive definite. Continuing the above case with $n=2$, write $C$ in general form as $$ C=\begin{pmatrix} \sigma_1^2 & \rho \sigma_1 \sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2\end{pmatrix} $$ Then we can find the inverse as $$ C^{-1}=\frac1{(1-\rho^2)\sigma_1\sigma_2}\begin{pmatrix} \frac{\sigma_2}{\sigma_1} & -\rho \\ -\rho & \frac{\sigma_1}{\sigma_2}\end{pmatrix} $$ and then we find that the weights are proportional to $$ (w_1,w_2) \propto \frac1{(1-\rho^2) \sigma_1 \sigma_2} \left(\frac{\sigma_2}{\sigma_1}-\rho, -\rho+\frac{\sigma_1}{\sigma_2}\right) $$ Now, if $\sigma_1=\sigma_2$, the weights are equal and positive, but when they differ, for some correlations sufficiently close to 1, one of the weights is negative. For instance, if $\sigma_1 < \sigma_2$, then we get $w_2<0$. So the weight of the least precise measurement is negative. The intuitive explanation is that for correlation $\rho$ sufficiently close to 1, both error terms will with high probability have the same sign, so it makes sense that one of the weights should be negative. $^\dagger$ Note that in this argument we have really assumed that $c>0$ (and if $c=1$ the argument will need modification). If $c<0$, much the same will be true, but we must replace "correlation close to 1" with "correlation close to $-1$".
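As a quick numerical illustration of the closed-form weights, assuming the illustrative values $\sigma_1 = 1$, $\sigma_2 = 2$, $\rho = 0.9$ (not taken from the answer):

```python
import numpy as np

s1, s2, rho = 1.0, 2.0, 0.9
C = np.array([[s1**2,      rho*s1*s2],
              [rho*s1*s2,  s2**2    ]])

ones = np.ones(2)
w = np.linalg.solve(C, ones)   # C^{-1} 1
w /= w.sum()                   # w = C^{-1} 1 / (1' C^{-1} 1)

print(w)          # roughly [ 1.571, -0.571]: the weight on the noisier measurement is negative
print(w @ C @ w)  # ~0.543, below sigma_1^2 = 1, the variance of the best single measurement
```

So with these values the optimal combination shorts the noisier measurement and still beats either measurement alone.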
46,037
Can an optimal weighted average ever have negative weights?
In abstract theory, yes, because of the correlation structure. If you had uncorrelated measurements (with positive variance), then the weights could only be positive. Example: Let $\mu$ denote the true value. Let noisy measurement $X_1 \sim \mathcal{N}(\mu, 1)$. Let $X_2 = 2X_1 - \mu$, hence $\operatorname{E}[X_2] = \mu$ but $X_2$ is perfectly correlated with $X_1$. The covariance matrix is $ C = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} $. The solution is $\mathbf{w} = \begin{bmatrix} 2 \\ -1 \end{bmatrix} $. Observe $2X_1 - X_2 = \mu$ and that this linear combination has no variance (because $\mathbf{w}$ lies in the null space of $C$). (As @kjetil b halvorsen points out and explores more deeply in his answer, negative weights aren't limited to degenerate cases like this.) An equivalent finance problem: An almost equivalent problem is solving for the minimum variance portfolio in finance. Let $R$ denote a vector of $n$ returns. Let $C = \operatorname{Cov}(R)$ denote the covariance matrix of $R$. Let $\mathbf{w}$ denote a vector of portfolio weights. The minimum variance portfolio is found by solving: \begin{equation} \begin{array}{*2{>{\displaystyle}r}} \mbox{minimize (over $w_i$)} & \mathbf{w}'C \mathbf{w} \\ \mbox{subject to} & \mathbf{w}'\mathbf{1} = 1 \end{array} \end{equation} This is exactly the same problem, and it has exactly the same solution. For invertible $C$: $$ \mathbf{w}_{mvp} = \frac{C^{-1} \mathbf{1}}{\mathbf{1}'C^{-1}\mathbf{1}}$$ Perhaps in the portfolio context, it's more intuitive that the minimum variance portfolio may involve both going long and short assets? (Note: before you run off and try to start an investment fund, realize that estimating $C$ has big problems of its own.) Some linear algebra interpretation: Let $U\Lambda U' = C$ be the eigenvalue decomposition of $C$ (this is basically PCA).
Then $Y = R U$ is a random vector of uncorrelated random variables whose variance is given by the diagonal matrix of eigenvalues $\Lambda$. The minimum variance portfolio will give you positive weights on these random variables $Y_1, \ldots, Y_n$, but since these components are themselves linear combinations of security returns, you may get positive and negative weights in security weight space.
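The degenerate example above is easy to verify numerically; the following check (not part of the original answer) confirms that $\mathbf{w}$ lies in the null space of $C$ and yields zero variance:

```python
import numpy as np

C = np.array([[1.0, 2.0], [2.0, 4.0]])   # Cov of (X1, X2) with X2 = 2*X1 - mu
w = np.array([2.0, -1.0])

print(C @ w)      # [0. 0.]  -> w lies in the null space of C
print(w @ C @ w)  # 0.0      -> the combination 2*X1 - X2 has zero variance
print(w.sum())    # 1.0      -> the weights still sum to one
```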
46,038
An X% trimmed mean means?
Neither is "right" or "wrong"; it's just that usage is not universal. However, I've seen Wilcox's definition used more than the other. Wikipedia agrees with him, as do several other sites I browsed to, and so do SAS and R.
46,039
An X% trimmed mean means?
As Peter correctly points out, the conventions on usage of this term differ, and the definition used by Wilcox seems (unfortunately) to be the more common. I disagree with the view that neither is right or wrong. The definition that removes X% from each side of the ordered data vector, but refers to this as an "X% trimmed mean", is a zombie definition --- it seems to be impossible to kill despite obvious and serious flaws. Under this definition you are actually removing twice as much of the data as the "headline" amount you refer to in your description of the statistic. In particular, a "50% trim" removes all the data! That is contrary to the basic meaning of language, and it is highly misleading to the reader, who would expect removal of all the data to be described as a "100% trim". Use of this term, without explicit elaboration on its idiosyncrasy, is highly misleading. This definition is also completely inconsistent with analogous usage of significance levels for hypothesis tests and confidence intervals in statistical discussion. In those contexts, if you have a significance level $\alpha$ and you create a two-sided test/interval, the value $\alpha$ refers to the total area on both sides. So, for example, an equal-tail $1-\alpha$ confidence interval excludes an area of $\alpha/2$ from either side, and a two-sided symmetric hypothesis test at significance level $\alpha$ constructs the rejection region by allocating a null rejection probability of $\alpha/2$ to each side. In both cases the terminology respects the fact that the significance level is fixed as a total. The definition fails on both counts: it is contrary to ordinary language, and it is inconsistent with well-established (and linguistically appropriate) conventions for statistical description in other core areas of the subject. If you are going to report trimmed means in your own analysis for any purpose, please do not feed the zombies.
Please use this term in its more appropriate meaning, where an X% trimmed mean refers to the removal of X% of the data. If you are concerned about interpretation, leave a footnote explaining your usage of the term.
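A small sketch contrasting the two conventions; the function names and the toy data are made up for illustration:

```python
import numpy as np

def trim_each_side(x, p):
    # "X% trimmed mean" in the common (Wilcox / R / SAS) usage:
    # drop a fraction p of the observations from EACH end.
    # Note: under this convention p = 0.5 already removes all the data.
    x = np.sort(np.asarray(x, dtype=float))
    k = int(round(p * x.size))
    return x[k:x.size - k].mean()

def trim_total(x, p):
    # the alternative reading: drop a TOTAL fraction p, split between the ends
    return trim_each_side(x, p / 2)

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]
print(trim_each_side(data, 0.10))  # drops 1 and 100 -> mean of 2..9 = 5.5
print(trim_total(data, 0.20))      # same observations removed -> 5.5
```

The same number 5.5 is a "10% trimmed mean" under one convention and a "20% trimmed mean" under the other, which is exactly why the usage needs a footnote.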
46,040
Fitting a GARCH(1, 1) model
I explain how to get the log-likelihood function for the GARCH(1,1) model in the answer to this question. The GARCH model is specified in a particular way, but notation may differ between papers and applications. The log-likelihood may differ due to constants being omitted (they are irrelevant when maximizing). The MLE is typically found using a numerical optimization routine. A quick implementation in Python would proceed in three steps: define the relevant packages, define the algorithm, and check whether the output is reasonable.
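The original code blocks did not survive here, so below is one possible sketch of those three steps, assuming a Gaussian GARCH(1,1) likelihood with additive constants dropped. The function names, starting values, and the use of scipy.optimize.minimize are illustrative assumptions, not the original answer's code:

```python
# step 1: define relevant packages
import numpy as np
from scipy.optimize import minimize

# step 2: define the algorithm -- negative Gaussian log-likelihood with
# sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
def neg_loglik(params, r):
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)              # a common choice for the initial variance
    for t in range(1, r.size):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(sigma2) + r ** 2 / sigma2)

def fit_garch11(r):
    x0 = np.array([0.1 * np.var(r), 0.05, 0.90])
    res = minimize(neg_loglik, x0, args=(r,),
                   bounds=[(1e-8, None), (0.0, 0.999), (0.0, 0.999)])
    return res.x

# step 3: check if the output is reasonable -- simulate from known
# parameters and see whether the fit lands nearby
rng = np.random.default_rng(1)
omega, alpha, beta, n = 0.1, 0.1, 0.8, 3000
r, s2 = np.empty(n), omega / (1 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

omega_hat, alpha_hat, beta_hat = fit_garch11(r)
print(omega_hat, alpha_hat, beta_hat)   # should land near (0.1, 0.1, 0.8)
```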
46,041
Bayesian Statistics. Please help me to find an example where posterior variance is greater than prior variance
Gamma-Poisson: Suppose your prior for Poisson data is $\lambda \sim \mathsf{Gamma}(\text{shape}=4, \text{rate}=1/4).$ This distribution has mean 16 and variance 64. Its 95th percentile is about 31.

qgamma(.95, 4, .25)
[1] 31.01463

But your first Poisson observation is $x = 500.$ Then your posterior distribution is $\mathsf{Gamma}(\text{shape}=4+500, \text{rate}=1/4+1) = \mathsf{Gamma}(\text{shape}=504, \text{rate}=1.25),$ which has mean $504/1.25 = 403.2$ and variance $504/1.25^2 = 322.56 > 64.$ Beta-binomial: For a beta-binomial example (along lines suggested by @whuber), consider tossing a coin. Suppose your prior for $P(\text{Head})=\theta$ is $\mathsf{Beta}(10,1),$ for a coin heavily biased in favor of Heads. Then on four tosses you get two Heads and two Tails. Find the variances of the prior and posterior distributions. Note: In both examples, the idea is to have an informative prior and then a small amount of data that doesn't match the prior. With a large amount of data, the data can overwhelm the prior, yielding a posterior with small variance. Now I hope you can find examples of your own.
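The Gamma-Poisson arithmetic can be checked directly, using mean = shape/rate and variance = shape/rate^2 for a rate-parameterised Gamma:

```python
# prior: Gamma(shape=4, rate=1/4)
shape0, rate0 = 4.0, 0.25
prior_mean, prior_var = shape0 / rate0, shape0 / rate0 ** 2
print(prior_mean, prior_var)       # 16.0 64.0

# posterior after one Poisson observation x = 500: Gamma(shape0 + x, rate0 + 1)
x = 500
shape1, rate1 = shape0 + x, rate0 + 1
post_mean, post_var = shape1 / rate1, shape1 / rate1 ** 2
print(post_mean, post_var)         # 403.2 322.56
print(post_var > prior_var)        # True
```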
46,042
DQN with XGBoost
The downside of using XGBoost compared to a neural network is that a neural network can be trained partially, whereas an XGBoost regression model has to be trained from scratch for every update. This is because an XGBoost model uses sequential trees fitted on the residuals of the previous trees, so iterative updates to the model are not really possible. Someone did attempt this but didn't get very good results, though I don't see why it could not work in theory.
46,043
DQN with XGBoost
In Q learning, it is possible to use practically any regression model that can be updated incrementally. In fitted Q learning, any regression model can be used, including tree-based approaches; see e.g. page 70 of this book: https://orbi.uliege.be/bitstream/2268/27963/1/book-FA-RL-DP.pdf However, we can hardly speak of Deep Q learning then, since that term by nature refers to deep neural networks, i.e. including automated feature extraction from non-trivial objects (images, texts).
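The fitted-Q workflow with a refit-from-scratch regressor can be sketched on a toy problem. Everything below (the chain MDP, the NN1Regressor stand-in, the iteration count) is an illustrative construction, not from the book; a tree ensemble such as XGBoost could be dropped in wherever NN1Regressor appears, refit on each pass:

```python
import numpy as np

# Fitted Q iteration on a 5-state chain MDP (states 0..4; action 1 moves right,
# action 0 moves left; reward 1 for landing in state 4). A trivial
# 1-nearest-neighbour regressor stands in for "any regression model" that is
# refit from scratch on each pass.

class NN1Regressor:
    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self
    def predict(self, X):
        d = ((np.asarray(X, float)[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        return self.y[d.argmin(axis=1)]

def step(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0)

actions = (0, 1)
transitions = [(s, a) + step(s, a) for s in range(5) for a in actions]
X = np.array([(s, a) for s, a, _, _ in transitions], float)
S2 = np.array([t[2] for t in transitions], float)
R = np.array([t[3] for t in transitions])

gamma, model = 0.9, None
for _ in range(60):                     # each pass refits the model from scratch
    if model is None:
        y = R
    else:
        q_next = np.stack([model.predict(np.column_stack([S2, np.full(S2.size, a)]))
                           for a in actions])
        y = R + gamma * q_next.max(axis=0)
    model = NN1Regressor().fit(X, y)

greedy = [max(actions, key=lambda a: model.predict([[s, a]])[0]) for s in range(4)]
print(greedy)                           # the learned policy moves right: [1, 1, 1, 1]
```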
46,044
Regression as a way to determine variable importance
Does anyone have any thoughts on the validity of this method? The main issue I can see is that the linear regression proposes to split the sample space in two (the two "sides" of a hyperplane), whereas the nearest neighbors will split it into regions (maybe a lot of them), depending on how the different classes are sampled in the sample space. The worst case scenario I can see is the following. Your data shows a distribution that KNN can handle well, but on which a linear regression will perform miserably, like the chessboard pattern generated by the code at the bottom of the post. Though this is a caricature, many other datasets will exhibit the same behavior. A linear regression would discard both features if you select your features based on the p-values:

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.538955   0.041424  37.151   <2e-16 ***
x1          -0.037032   0.027543  -1.345    0.179
x2          -0.006612   0.027526  -0.240    0.810
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

If you decide to weight the features based on the weights of the linear regression, note that the weights may be orders of magnitude different, though there is no reason for them to be! This would be equivalent to projecting your data onto a single axis, on which the KNN could not learn anything relevant (think about the colors of the dots if you look at them from a single dimension). Therefore, I would not use such an approach to calibrate the weights of a KNN (or to select the attributes). Edit: You can increase any goodness-of-fit measure by adding variables showing a linear dependency with respect to the target. In that case, the goodness of fit would no longer be trivial, but you would lose the information that comes from the first two variables. On weighted KNN: there is, however, the possibility of changing the distance function, from a Euclidean distance to a distance where the dimensions have different weights.
This approach is time-consuming to tune because you have to evaluate the model each time you change the weight of one attribute. If $p$ is your number of features and you want to try replacing each feature $x$ by $x/2$ and $x\times2$, you have $3^p$ models to evaluate (with a naive strategy). Note that you may get a significant improvement if you treat the features independently (not every combination of them), running only $3\times p$ models. Other improvements for KNN: J. Wang, P. Neskovic, and L. N. Cooper, "Improving nearest neighbor rule with a simple adaptive distance measure," Pattern Recognition Letters, vol. 28, no. 2, pp. 207–213, Jan. 2007. "Random KNN feature selection - a fast and stable alternative to Random Forests", Shengqiao Li, E James Harner and Donald A Adjeroh, Bioinformatics 2011 12:450. Code:

N <- 2000
decision_function <- function(x){
  (((x[1]%%1-0.5)*(-x[2]%%1-0.5))>0)+1
}
x <- abs(matrix(runif(n = N, min = 0, max = 2), ncol = 2))
y <- apply(X = x, MARGIN = 1, FUN = decision_function)
plot(x, col=y)
model <- lm(y ~ x, data = cbind.data.frame(x,y))
summary(model)
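The same demonstration can be sketched in Python (numpy only); a hand-rolled least-squares fit and a 1-nearest-neighbour classifier stand in for lm and a KNN package here:

```python
import numpy as np

rng = np.random.default_rng(0)

def label(X):
    # mirrors the R rule: note (-x2) %% 1 flips the second coordinate's half-cell
    return (((X[:, 0] % 1 - 0.5) * ((-X[:, 1]) % 1 - 0.5)) > 0).astype(int) + 1

Xtr, Xte = rng.uniform(0, 2, (1000, 2)), rng.uniform(0, 2, (500, 2))
ytr, yte = label(Xtr), label(Xte)

# a linear model sees nothing: both slopes are near zero
A = np.column_stack([np.ones(len(Xtr)), Xtr])
beta = np.linalg.lstsq(A, ytr, rcond=None)[0]
print(np.round(beta[1:], 3))

# ...while 1-NN classifies held-out points almost perfectly
d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
acc = (ytr[d.argmin(axis=1)] == yte).mean()
print(acc)
```

The near-zero regression coefficients would lead the proposed method to discard or down-weight both features, even though they are exactly the features the KNN needs.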
Regression as a way to determine variable importance
Does anyone have any thoughts on the validity of this method? The main issue I can see is that the linear regression proposes to split the sample space in two (the two "sides" of a hyperplane), wherea
Regression as a way to determine variable importance Does anyone have any thoughts on the validity of this method? The main issue I can see is that the linear regression proposes to split the sample space in two (the two "sides" of a hyperplane), whereas the nearest neighbors will split it in regions (maybe a lot), depending on how the different classes are sampled on the sample space. The worst case scenario I can see is the following. Your data shows a distribution that KNN can handle well, but that a linear regression will perform miserably at, like a chessboard on the following picture (the code is at the bottom of the post). Though this is a caricature, many other datasets will exhibit the same behavior. A linear regression would discard both features if you select your features based on the p-values : Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.538955 0.041424 37.151 <2e-16 *** x1 -0.037032 0.027543 -1.345 0.179 x2 -0.006612 0.027526 -0.240 0.810 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 If you decide to weight the features based on the weights of the linear regression, note that the weights may be order of magnitudes different, though there is no reason for them to be ! This would be equivalent to project your data on a single axe, on which the KNN could not learn anything relevant (think about the colors of the dots if you look at them from a single dimension). Therefore, I would not use such an approach to calibrate the weights of a KNN (or select the attributes). Edit You can increase any goodness of fit measure adding variables showing a linear dependency with respect to the target. In this case, the goodness of fit would not be trivial any longer, but you would lose the information that comes from the first two variables. On weighted KNN However, there is a possibility to change the distance function, from an Euclidean distance to a distance where the dimensions have different weights. 
This approach is long to tune because you have to evaluate the model each time you change the weight of one attribute. If $p$ is your number of features and you want to replace a feature $x$ by $x/2$ and $x\times2$, you have $3^p$ models to evaluate (with a naive strategy). Note that you may have a significant improvement if you treat the features independently (not every combination of them), running only $3\times p$ models. Other improvements for KNN J. Wang, P. Neskovic, and L. N. Cooper, “Improving nearest neighbor rule with a simple adaptive distance measure,” Pattern Recognition Letters, vol. 28, no. 2, pp. 207–213, Jan. 2007. "Random KNN feature selection - a fast and stable alternative to Random Forests" Shengqiao Li, E James Harner and Donald A Adjeroh, Bioinformatics 2011 12:450 Code N <- 2000 decision_function <- function(x){ (((x[1]%%1-0.5)*(-x[2]%%1-0.5))>0)+1 } x <- abs(matrix(runif(n = N, min = 0, max = 2), ncol = 2)) y <- apply(X = x, MARGIN = 1, FUN = decision_function) plot(x, col=y) model <- lm(y ~ x, data = cbind.data.frame(x,y)) summary(model)
46,045
Regression as a way to determine variable importance
In the classical regression situation the problem has given rise to a number of solutions. In an article outlining her software for calculating a number of these methods, entitled "Relative importance for linear regression in R: the package relaimpo" available here, Ulrike Grömping gives a number of different metrics for deciding on the relative importance of predictors in a regression model. Simple ones include: which variable alone does best, which variable adds most conditional on all the others. More complex ones include some relying on standardising first, and computer-intensive ones which involve optimising over possible orders of entry of variables into the model. Since the article is open access I will not attempt a more detailed summary here. The software is open source.
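The orders-of-entry idea mentioned above is commonly known as the LMG metric: average each predictor's $R^2$ gain over all $p!$ orders in which the predictors can enter the model. A brute-force sketch in Python (my own function names, not relaimpo's implementation, and only feasible for small $p$):

```python
import numpy as np
from itertools import permutations

def r2(X, y):
    """R^2 of an OLS fit of y on the columns of X (with intercept)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - resid @ resid / tss

def lmg(X, y):
    """LMG relative importance: each predictor's R^2 gain,
    averaged over all p! orders of entry into the model."""
    p = X.shape[1]
    scores = np.zeros(p)
    perms = list(permutations(range(p)))
    for order in perms:
        included, prev = [], 0.0
        for j in order:
            included.append(j)
            cur = r2(X[:, included], y)
            scores[j] += cur - prev
            prev = cur
    return scores / len(perms)

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 3))
y = 3 * X[:, 0] + 1 * X[:, 1] + rng.normal(size=n)   # X[:, 2] is irrelevant
imp = lmg(X, y)
```

A useful sanity check: because each permutation's gains telescope, the LMG shares sum exactly to the $R^2$ of the full model.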
46,046
Cox PH linearity assumption: reading martingale residual plots
Quoting from Harrell's Regression Modeling Strategies, second edition, page 494: When correlations among predictors are mild, plots of estimated predictor transformations without adjustment for other predictors (i.e., marginal transformations) may be useful. Martingale residuals may be obtained quickly by fixing $\hat \beta$ = 0 for all predictors. Then smoothed plots of predictor against residual may be made for all predictors. So there is nothing necessarily wrong with examining martingale residuals from a null model for linearity, provided that "correlations among predictors are mild." As noted in Table 20.3 on page 494, there are several acceptable ways to use martingale residuals in this context, depending on whether you want to estimate transformations versus checking for nonlinearity, and whether you wish to adjust for other predictors in the process. The apparent differences among the cited references represent different ways the martingale residuals were calculated and used. If you display residuals for a model null in the continuous predictor of interest, then you will display the shape of the relation of the predictor to outcome. So if that's a reasonably straight line you have close to a linear relation, as in the first example you cite (and in the zoomed-out picture for your data). If you don't get a straight line, the shape of the curve might suggest a useful functional form to try for that predictor. As the martingale residuals can't go above 1, the actual value at each point along the curve is not a hazard ratio, but the shape of the curve indicates the general shape of the relation of hazard to the predictor. That's what you do when you try to estimate the functional form of the relation. 
If instead you check the relation of the martingale residuals to the predictor from a model including an estimated coefficient for that predictor, then you hope for a flat horizontal smoothed line, indicating that the single linear coefficient for the predictor adequately represents the contribution of the predictor to outcome. That's what you do when you test for linearity in the predictor, as in the 2nd and 3rd references. Nevertheless, particularly with this many data points, starting with restricted spline fits for continuous predictors might provide a simpler and more general way to both test for and adjust for nonlinearities. If the relation of a predictor to outcome is linear, then the higher-order spline terms will have non-significant coefficients. If the relation is non-linear, you can adjust for any reasonable degree of nonlinearity by adding more knots. You ultimately should calibrate the full model (checking linearity with respect to the overall linear predictor, and correcting for optimism), as with the calibrate() function in Harrell's rms package in R.
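As a concrete illustration of the null-model case (all $\hat \beta$ fixed at 0), the martingale residual reduces to the event indicator minus the Nelson–Aalen cumulative hazard at the observed time. A self-contained sketch (Python with my own function names; in practice you would take residuals from a coxph fit in R):

```python
import numpy as np

def null_martingale_residuals(time, event):
    """Martingale residuals for a null Cox model (all beta fixed at 0):
    r_i = delta_i - Lambda_hat(t_i), where Lambda_hat is the Nelson-Aalen
    estimator, sum over event times t_j <= t of d_j / n_j."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    order = np.argsort(time)
    t_sorted, e_sorted = time[order], event[order]
    n = len(time)
    cumhaz_at = {}
    Lambda, at_risk = 0.0, n
    i = 0
    while i < n:                       # walk through tied-time groups
        j, d = i, 0
        while j < n and t_sorted[j] == t_sorted[i]:
            d += e_sorted[j]
            j += 1
        Lambda += d / at_risk
        cumhaz_at[t_sorted[i]] = Lambda
        at_risk -= (j - i)
        i = j
    return event - np.array([cumhaz_at[t] for t in time])

rng = np.random.default_rng(2)
t = rng.exponential(1.0, size=300)     # latent event times
c = rng.exponential(1.5, size=300)     # censoring times
obs = np.minimum(t, c)
delta = (t <= c).astype(int)
res = null_martingale_residuals(obs, delta)
```

These residuals are bounded above by 1 (as noted above) and sum to zero, which makes smoothed plots of them against a predictor interpretable as the shape of its relation to outcome.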
46,047
Joint credible regions from MCMC draws
Indeed, the two approaches you mention are the most straightforward; I believe for example that the function HPDregionplot in the R package emdbook uses a kernel density estimator. Another option (and this is just a suggestion) would be to find the mode or centre of your distribution, compute the distance of each point in your sample to that mode, and choose the $(1-\alpha)n$ points with smallest such distance. The Euclidean distance would give you a circular credible region; you can use another distance (maybe based on the empirical covariance matrix?) to get an elliptical region.
46,048
Joint credible regions from MCMC draws
If you don't mind having a rectangular confidence region, and you know things are unimodal and symmetric, you could use the approach taken by credible.region() in the bayesSurv R package: https://rdrr.io/cran/bayesSurv/src/R/credible.region.R I believe this is the idea behind how it works: We have $n$ MCMC samples for each of $p$ parameters. Imagine them plotted in $p$-dimensional space. Start with the smallest hypercube around all the samples. Shrink it down so it just barely excludes any point which is a marginal min or max along any axis. (E.g. with $p=2$, we would remove up to 4 points: the top-most, left-most, right-most, and bottom-most points. Or maybe as few as just 2 points, if e.g. the top-most and right-most values are at the same point and likewise for left/bottom.) Repeat the whole process on the remaining points. Continue until only 95% (or whatever desired fraction) of samples remain. (But the way it's implemented in practice, you only need to sort each column once---no need for a loop the way I described it here.) This rectangular region will be larger than an ellipsoid, but not as computationally intensive as relying on KDE or Euclidean distances.
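The peeling procedure described above can be sketched directly (a rough Python illustration of those steps, not a translation of the bayesSurv code):

```python
import numpy as np

def rectangular_credible_region(samples, level=0.95):
    """Peel off marginal extremes until `level` of the points remain;
    return the bounding box of the survivors. A sketch of the idea
    behind bayesSurv::credible.region, not its exact implementation."""
    pts = np.asarray(samples, float)
    target = int(np.ceil(level * len(pts)))
    while len(pts) > target:
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        keep = ~np.any((pts == lo) | (pts == hi), axis=1)
        if not keep.any() or keep.sum() < target:
            break                      # removing another shell would overshoot
        pts = pts[keep]
    return pts.min(axis=0), pts.max(axis=0)

rng = np.random.default_rng(3)
draws = rng.normal(size=(4000, 2))     # stand-in for MCMC draws
lower, upper = rectangular_credible_region(draws, 0.95)
inside = np.all((draws >= lower) & (draws <= upper), axis=1).mean()
```

With continuous draws, every peeled point ends up strictly outside the final box, so the realized coverage lands just at or slightly above the requested level.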
46,049
Joint credible regions from MCMC draws
I played around with a few different options, but I figured I'd share the one that I found worked best. Note: In my application, the posterior is well approximated as a multivariate normal. In other applications, the HPDregionplot approach may be more flexible. There are essentially 3 steps... Rotate the data along its principal components. Scale each direction by the reciprocal square root of the eigenvalue corresponding to that direction. In this new space, find the Euclidean ball with the smallest radius $r$ that contains $(1-\alpha)\times n$ points. Convert this "spherical" region back to an "elliptical" region in the original space. A quick illustration of this process is given below. Edit: This approach seems (roughly, if not exactly) equivalent to using Mahalanobis distance using the sample precision matrix $S^{-1}$. In other words, let $d_\star$ be the smallest possible $d$ such that $$\sum_{i=1}^nI\{({\bf x}_i - {\bf \hat\mu})^T S^{-1}({\bf x}_i - {\bf \hat\mu}) \leq d\} \geq n(1-\alpha),$$ where $I(b)$ is the indicator function (equal to $1$ when $b$ is true). Then $$R = \{{\bf x} \ | \ ({\bf x} - {\bf \hat\mu})^T S^{-1}({\bf x} - {\bf \hat\mu}) \leq d_\star\}$$ is an approximate $(1-\alpha)\times 100\%$ joint confidence region for ${\bf x}$.
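The Mahalanobis-distance formulation from the edit is straightforward to sketch (Python, with my own function names):

```python
import numpy as np

def mahalanobis_region(samples, level=0.95):
    """Smallest d_star such that (x - mu)^T S^{-1} (x - mu) <= d_star
    holds for at least `level` of the draws, with mu and S estimated
    from the sample itself."""
    X = np.asarray(samples, float)
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d = np.einsum('ij,jk,ik->i', diff, S_inv, diff)  # squared Mahalanobis
    return mu, S_inv, np.quantile(d, level)

rng = np.random.default_rng(4)
draws = rng.multivariate_normal([0.0, 0.0],
                                [[2.0, 0.8], [0.8, 1.0]], size=5000)
mu, S_inv, d_star = mahalanobis_region(draws)
diff = draws - mu
inside = (np.einsum('ij,jk,ik->i', diff, S_inv, diff) <= d_star).mean()
```

For multivariate-normal draws in two dimensions, $d_\star$ should land near the $\chi^2_2$ 0.95 quantile (about 5.99), and the empirical coverage `inside` near the nominal 0.95.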
46,050
Why are exponential smoothing models not considered auto-regressive?
For an autoregressive model, non-linear or linear, the number of lags must be finite. An ETS(A,N,N) model can be written as an AR($\infty$) model, but not as an autoregressive model with finite lags. A few other exponential smoothing models can be written in AR($\infty$) form, but none can be written as an autoregressive model with finite lags. See https://otexts.org/fpp2/arima-ets.html for the details. The results in my paper with Bergmeir and Koo require you to estimate the model using a finite set of autoregressive predictors, so the number of lags must be much less than the number of observations.
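For the ETS(A,N,N) case specifically, the AR($\infty$) representation is the exponentially weighted sum of past observations; here is a quick numeric check (a Python sketch with my own function names) that the smoothing recursion and the infinite-lag weighted form give the same one-step forecast:

```python
import numpy as np

def ses_forecast(y, alpha, l0=0.0):
    """One-step SES forecast via the recursion l_t = a*y_t + (1-a)*l_{t-1}."""
    level = l0
    for obs in y:
        level = alpha * obs + (1 - alpha) * level
    return level

def ses_as_weighted_lags(y, alpha, l0=0.0):
    """The same forecast written as an 'AR(infinity)'-style weighted sum:
    weight alpha*(1-alpha)^j on the j-th most recent observation, plus a
    geometrically vanishing term carrying the initial level."""
    y = np.asarray(y, float)
    powers = np.arange(len(y))[::-1]   # oldest obs gets the highest power
    return float(alpha * ((1 - alpha) ** powers) @ y
                 + (1 - alpha) ** len(y) * l0)

rng = np.random.default_rng(6)
y = rng.normal(10.0, 2.0, size=50)
f1 = ses_forecast(y, alpha=0.3, l0=5.0)
f2 = ses_as_weighted_lags(y, alpha=0.3, l0=5.0)
```

The two forms agree to machine precision; the point of the answer stands, though: the weighted-lag form never truncates to a finite number of lags with exactly zero remainder.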
46,051
Checking the proportional hazard assumption
The global test of proportional hazards is not well-calibrated. You haven't controlled for multiple comparisons. It's difficult to gauge the power of the test. $\alpha=0.05$ is probably too lax in most sample sizes. The test is arbitrarily powerful in large sample sizes. It's possible that the covariate you identify is a spurious finding, and that it arises from natural variability in the observation of time-to-event data.
Even if the hazards were not proportional, altering the model to fit a set of assumptions fundamentally changes the scientific question. As Tukey said, "Better an approximate answer to the exact question, rather than an exact answer to the approximate question."
If you were to fit the Cox model in the presence of non-proportional hazards, what is the net effect? Slightly less power. In fact, you can recover most of that power with robust standard errors (specify robust=TRUE or cluster = ~id). In this case the interpretation of the (exponentiated) model coefficient is a time-weighted average of the hazard ratio--I do this every single time.
When the actual hazard ratio over time is of interest, there are flexible methods of estimating its value. You may create a flexible, polynomial representation of time using basis splines and fit their interaction with the covariate(s) to estimate a hazard-ratio time function. The power of the Cox model may be compromised by this. Using a parametric exponential survival model with spline adjustment for time can approximate the semi-parametric inference of the Cox model very well, and is better powered to detect interactions of time with one or more covariates.
46,052
Checking the proportional hazard assumption
If hypothesis testing is your main goal, you should not do anything at all and stick with the model you had anticipated using before seeing the data. Changing the pre-defined model based on the results of the cox.zph() test can lead to biased estimates and invalid p-values. See SiM paper: https://harlanhappydog.github.io/files/SiM.pdf
46,053
Sampling from a categorical distribution
Let me unravel your question to remove all the not-so-relevant fluff around it. Given a tuple of $n$ values of the form $(p_i)_{1\leq i\leq n}$, where each $p_i\in(0,1)$, the question is to characterize the distribution of the following process: Draw independent random variables $U_1,\dots,U_n$ uniformly distributed in $[0,1]$ Return $$\arg\!\max_{1\leq i\leq n} \left( \log \frac{p_i}{1-p_i} - \log\log\frac{1}{U_i}\right)$$ What we will show: The output of this algorithm is a categorical random variable $Z$ such that $$ \forall i\in [n], \quad \mathbb{P}\{ Z = i\} \propto \frac{p_i}{1-p_i}\,. \tag{$\dagger$} $$ Detailed proof. For every $1\leq i\leq n$, let $$X_i \stackrel{\rm def}{=} \log \frac{p_i}{1-p_i} - \log\log\frac{1}{U_i} = - \log\left(\frac{1-p_i}{p_i}\log\frac{1}{U_i}\right)\,.$$ By a standard result, we have that $\log\frac{1}{U_i}$ follows an exponential distribution with parameter $1$, and therefore (by properties of the exponential distribution) this implies that $$ Y_i \stackrel{\rm def}{=}\frac{1-p_i}{p_i}\log\frac{1}{U_i} \sim \mathrm{Exp}(\frac{p_i}{1-p_i})\,.\tag{1}$$ Now, since $$ \arg\!\max_{1\leq i\leq n} X_i = \arg\!\max_{1\leq i\leq n} (-\log Y_i) = \arg\!\min_{1\leq i\leq n} Y_i\tag{2} $$ we have that the output $Z$ of the algorithm has probability mass function $$ \forall i\in[n],\quad\mathbb{P}\{Z=i\} = \mathbb{P}\{ Y_i = \min_{1\leq j\leq n} Y_j\} = \frac{\frac{p_i}{1-p_i}}{\sum_{j=1}^n \frac{p_j}{1-p_j}} \tag{3} $$ using e.g. the result from this other question for the last equality. In other terms, The output of the algorithm is $i$ with probability proportional to $\frac{p_i}{1-p_i}$. Alternative. if you don't care about the derivation, here is the quick and simple explanation: this is called the Gumbel trick. If $U$ is uniform on $[0,1]$, then $-\log\log\frac{1}{U}$ has a standard Gumbel distribution. And you can use the following theorem (well-known, I reckon, to whoever knows it) Theorem. 
If $G_1,\dots,G_n$ are independent standard Gumbel r.v.'s, and $\alpha_1,\dots, \alpha_n > 0$, then the random variable $$ X = \arg\!\max_{1\leq i\leq n}(\log \alpha_i + G_i) $$ takes values proportional to the $\alpha_i$: $$ \forall i\in [n], \quad \mathbb{P}\{X=i\} \propto \alpha_i\,. $$ See e.g. this for more.
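A quick Monte Carlo check of the claim $(\dagger)$ above — a Python sketch drawing the scores directly and comparing empirical frequencies with the normalized odds:

```python
import numpy as np

rng = np.random.default_rng(5)
p = np.array([0.1, 0.3, 0.6, 0.8])
n = 200_000

# Score for category i in each trial: log(p_i/(1-p_i)) - log(-log U_i),
# then take the argmax across categories.
U = rng.uniform(size=(n, len(p)))
scores = np.log(p / (1 - p)) - np.log(-np.log(U))
Z = scores.argmax(axis=1)

empirical = np.bincount(Z, minlength=len(p)) / n
odds = p / (1 - p)
theoretical = odds / odds.sum()        # P{Z = i} per the derivation above
```

With 200k trials the empirical frequencies match the odds-proportional probabilities to within Monte Carlo error.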
46,054
Is it possible to do Fizz Buzz in deep learning?
As the author of the linked blog post, I am happy to say that (with the correct choice of hyperparameters and a little luck) Fizz Buzz can be completely learned by a neural network with one hidden layer. I spent some time investigating why it works, and the reason is somewhat interesting. It hinges upon the binary representation of the input and the following observation: if two numbers differ by a multiple of 15, then they belong to the same fizzbuzz "class" (as-is / fizz / buzz / fizzbuzz) It turns out that there are a number of ways in which you can flip two bits in a 10-digit binary number and get a number that differs by a multiple of 15. For example, if you start with some number x and turn on the 128 bit and turn off the 8 bit, you get x + 120. There are many other such examples. And if you have a linear function of the bits that puts identical weights on those two bits, it will produce the same output for x and x + 120. As there are many such bit pairs, it turns out that the neural network basically learns a bunch of equivalence classes (for example, one would contain x, x + 120, and a few other numbers), and "memorizes" the correct answer for each equivalence class. And so it turns out that when you train the model on the numbers 101 to 1023, you've got enough equivalence classes to predict correctly on 1 to 100. (This is not formal, of course, this is just a high-level summary of what I learned when I investigated the network.) -- As to your question about finding prime numbers, I'd be surprised if an approach like this worked. My sense is that the nice "same modulo 15" structure of this problem is what makes the neural net work, and it's hard to think of anything analogous for e.g. finding prime numbers.
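The bit-pair observation is easy to verify exhaustively: turning bit $a$ on and bit $b$ off changes a number by $2^a - 2^b$, which is divisible by 15 exactly when $a \equiv b \pmod 4$, because $2^k \bmod 15$ cycles through 1, 2, 4, 8. A quick Python check:

```python
# Enumerate pairs of bit positions (a, b), a > b, in a 10-bit number
# where turning bit a on and bit b off adds 2**a - 2**b, a multiple of 15.
pairs = [(a, b) for a in range(10) for b in range(a)
         if (2 ** a - 2 ** b) % 15 == 0]
# The example from the answer: bit 7 (the 128s place) and bit 3 (the 8s
# place), since 128 - 8 = 120 = 8 * 15.
```

Grouping the ten bit positions by residue mod 4 ({0,4,8}, {1,5,9}, {2,6}, {3,7}) gives $3 + 3 + 1 + 1 = 8$ such pairs, which is the raw material for the equivalence classes the network learns.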
Is it possible to do Fizz Buzz in deep learning?
As the author of the linked blog post, I am happy to say that (with the correct choice of hyperparameters and a little luck) Fizz Buzz can be completely learned by a neural network with one hidden lay
Is it possible to do Fizz Buzz in deep learning? As the author of the linked blog post, I am happy to say that (with the correct choice of hyperparameters and a little luck) Fizz Buzz can be completely learned by a neural network with one hidden layer. I spent some time investigating why it works, and the reason is somewhat interesting. It hinges upon the binary representation of the input and the following observation: if two numbers differ by a multiple of 15, then they belong to the same fizzbuzz "class" (as-is / fizz / buzz / fizzbuzz) It turns out that there are a number of ways in which you can reverse two bits in a 10-digit binary number and get a number that differs by a multiple of 15. For example, if you start with some number x and turn on the 128 bit and turn off the 8 bit, you get x + 120. There are many other such examples. And if you have a linear function of the bits that puts the identical weights on those two bits, it will produce the same output for x and x + 128. As there are many such bit pairs, it turns out that the neural network basically learns a bunch of equivalence classes (for example, one would contain x, x + 120, and a few other numbers), and "memorizes" the correct answer for each equivalence class And so it turns out that when you train the model on the numbers 101 to 1023, you've got enough equivalence classes to predict correctly on 1 to 100. (This is not formal, of course, this is just a high-level summary of what I learned when I investigated the network.) -- As to your question about finding prime numbers, I'd be surprised if an approach like this worked. My sense is that the nice "same modulo 15" structure of this problem is what makes the neural net work, and it's hard to think of anything analogous for e.g. finding prime numbers.
46,055
Is it possible to do Fizz Buzz in deep learning?
Let me answer the question in a meme way. Why (always) deep learning?

All neural nets do is linear regression (x*w+b) with some non-linearity around the (intermediate) response. Let's talk machine learning, better yet, optimization. The obvious general class of problems you are referring to is function approximation (not regression per se). So, why not use a method that was developed to do exactly this: writing programs. In theory, yes, you can use 'artificial intelligence' methods to create programs, and one of them, given enough data and time, can theoretically be FizzBuzz. Or a program that computes prime numbers (and that program could theoretically be the same as one written by a human). -- No deep learning here --.

Learning from data

Well, can we learn from data? Yes, we can. But first we need to understand the data and to engineer some features. Because only one numeric feature is not expressive enough... (for now). Some code incoming:

    library(tidyverse)
    theData <- data_frame(a  = as.double(1:100),
                          a3 = as.double(a %% 3 == 0),
                          a5 = as.double(a %% 5 == 0),
                          cl = case_when((a3 > 0) & (a5 > 0) ~ 'FizzBuzz',
                                         a3 > 0 ~ 'Fizz',
                                         a5 > 0 ~ 'Buzz',
                                         TRUE ~ 'Number')) %>%
      mutate(cl = factor(cl))

Now we have a numerical feature a (numbers) and a3 and a5 to help with the decision ... ... tree. (╯°□°)╯︵ ┻━┻ Again, not deep learning here, but a stacked model: the first level is a decision tree and the second level is (using Viola-Jones cascades or a simple filter on the Number response) a plain old linear regression with the solution $y=a$. The DT first:

    treeModel <- rpart::rpart(cl ~ ., theData,
                              control = rpart::rpart.control(minsplit = 5))
    rattle::fancyRpartPlot(treeModel, caption = '')

THAT IS CRAZY! A simple decision tree learned FizzBuzz! But did it? Apply some test data:

    testData <- data_frame(a  = as.double(200:300),
                           a3 = as.double(a %% 3 == 0),
                           a5 = as.double(a %% 5 == 0),
                           cl = case_when((a3 > 0) & (a5 > 0) ~ 'FizzBuzz',
                                          a3 > 0 ~ 'Fizz',
                                          a5 > 0 ~ 'Buzz',
                                          TRUE ~ 'Number'))
    predictions <- predict(treeModel, testData, type = 'class')
    table(testData$cl, predictions)

              predictions
               Buzz Fizz FizzBuzz Number
      Buzz       14    0        0      0
      Fizz        0   27        0      0
      FizzBuzz    0    0        7      0
      Number      0    0        0     53

Perfect on the test set for numbers 200 to 300! Well, the second layer is easy:

    lmModel <- lm(arep ~ a - 1, mutate(theData, arep = a))

The error estimating the number is

    testData %>%
      mutate(pred = predict(treeModel, ., type = 'class')) %>%
      filter(pred == 'Number') %>%
      mutate(apred = predict(lmModel, .),
             error = a - apred) %>%
      pull(error) %>%
      summary()

         Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
    5.684e-14 5.684e-14 5.684e-14 5.684e-14 5.684e-14 5.684e-14

... well, very close to 0. Tada! We learned FizzBuzz from data!

Derp learning

Probably you can do the same stuff with deep learning. You can also do stacked models with LSTM layers and convolution (ya know, because of modulo 3 and 5), and with a huge amount of data you may have a chance to generalize some patterns... yeah... no.

So I hope this answer helps to clarify that yes, it is possible. And no, you don't need deep learning to do the job. Note that from a single feature a, even deep learning will not be able to learn FizzBuzz. As for prime numbers... if you compute/engineer as many features as there are prime numbers, you can learn them from data, too. ¯\_(ツ)_/¯
46,056
Interpreting glm.diag.plots
Diagnostic plots for GLMs are very similar to those for LMs, on the grounds that the residuals of GLMs should be homoscedastic, independent of the mean, and asymptotically approach normality, i.e. in the case of large numbers of counts for a Poisson or binomial. (This means that such plots are much less useful for Bernoulli (aka binary, or binomial with $N=1$; aka standard logistic regression) responses; see e.g. this CV answer.)

It's not entirely clear to me why there are separate boot::glm.diag.plots() and underlying boot::glm.diag() functions that overlap a great deal with the built-in stats::plot.lm() (which also handles glm models); my guess is that either the author of the package (A. Canty) had slightly different preferences from the author of stats::plot.lm(), or, more likely, that the boot package was developed in 1997, when the R project had just begun, and so these functions were written in parallel.

Here are some similarities and differences between plot.lm() and glm.diag.plots():

- Linear predictor (aka "fitted") vs. residuals: same as the first plot (or which=1) in plot.lm(), except that glm.diag.plots() uses jackknife deviance residuals, which I can't find defined anywhere (it may be in the Davison and Snell chapter given as a reference in ?glm.diag), but which are computed as sign(dev) * sqrt(dev^2 + h * rp^2), where dev is the deviance residual, h is the hat value, and rp is the standardized Pearson residual (the Pearson residual divided by the scale parameter, if there is one, and divided by sqrt(1-h)).
- Ordered deviance residuals vs. quantiles of standard normal: the "Q-Q plot", same as the third plot (which=3) in plot.lm(); both use standardized deviance residuals $r_i/\sqrt{1-h_i}$, although you have to look in the code to find out.
- $h/(1-h)$ vs. Cook statistic: use which=6 to get this from plot.lm() ($h/(1-h)$ is "leverage"). glm.diag.plots() uses a more conservative rule of thumb to decide which points might be "influential".
- Case vs. Cook statistic: the same as which=4 in plot.lm().

plot.lm() seems to have a few advantages, in that it (1) deals more robustly with zero-weight and infinite cases, and (2) adds some smoothing lines and contours to the figures to help with interpretation. The car package has some extended functions (e.g. qqPlot), and the DHARMa package uses simulated residuals to get much more interpretable residual plots for logistic regression and other GLMs. mgcv::qq.gam also produces improved Q-Q plots by simulation of residuals.

Your first case looks OK: no obvious pattern, linear Q-Q plot, a couple of "influential" points. I would definitely double-check the two points that have high Cook's distance and a linear predictor value of >15 ... I can't really tell from the plot, but it looks like you may have many repeated values in your data?

Second case: same as the first, except that you have a lot more points (so that e.g. the clustering pattern in plot 1 is much stronger). The Q-Q plot looks wonky, but I'm not sure how much I would worry about that. Try with simulated residuals (DHARMa package) ...
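For concreteness, the jackknife deviance residual formula quoted above is a one-liner; the function name and signature below are my own, not boot's API:

```python
import math

# sign(dev) * sqrt(dev^2 + h * rp^2), as quoted above:
#   dev = deviance residual, h = hat value, rp = standardized Pearson residual
def jackknife_deviance_residual(dev, h, rp):
    # copysign gives the magnitude of the first argument with the sign of dev
    return math.copysign(math.sqrt(dev**2 + h * rp**2), dev)

print(jackknife_deviance_residual(3.0, 1.0, 4.0))   # sqrt(9 + 16) = 5.0
```

With h near 0 (a low-leverage point) this collapses to the plain deviance residual, which matches the intuition that the jackknife adjustment matters most for high-leverage observations.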
46,057
General approaches and techniques for developing good explanatory models for nonlinear data
I think it is worth considering the use of generalised additive models (GAMs). GAMs are able to encapsulate non-linear relations between the response variable and the outcome variables and are straightforward to explain. They are well-understood and widely used within the Statistics community.

In a totally informal manner: GAMs are practically GLMs with an out-of-the-box, semi-automated basis expansion module strapped in. No need to define quirky $x^{\frac{1}{4}}, \sin(2\pi x)$, etc. transformations; the best non-linear relation will be automatically selected. GAMs are great to visualise the (potentially) varying influence of $x$ on the outcome $y$. There is an abundance of good resources on using GAMs online (e.g. here and here), in print (e.g. here and here), and CV has literally dozens of insightful questions and answers on generalized additive models. R has two extremely good GAM packages, gam and mgcv. I would suggest you start with mgcv as a matter of convenience.

I would suggest you also look at the FAT/ML (Fairness, Accountability, and Transparency in Machine Learning) initiative. It has some great novel ideas. In relation with GAMs I would point you to the 2017 invited talk by Rich Caruana on "Friends Don’t Let Friends Deploy Black-Box Models"; it shows an application of an extension of GAMs (called GA2M) that is used instead of standard ML techniques (random forests), giving results of similar accuracy while also being fully interpretable.
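The "GLM plus basis expansion" intuition can be made concrete with a toy example. Everything below (the hinge basis, the tiny least-squares solver, the kinked target) is my own stdlib-only illustration of the idea, not what mgcv does internally (mgcv additionally penalizes wiggliness and chooses smoothness automatically):

```python
# Expand x into a truncated-power spline basis and fit by ordinary least
# squares: a non-linear shape (here |x - 0.5|) is recovered without
# hand-picking a transformation.

def basis(x, knots):
    # intercept, linear term, and one hinge (x - k)_+ per knot
    return [1.0, x] + [max(0.0, x - k) for k in knots]

def lstsq(X, y):
    # ordinary least squares via the normal equations (Gauss-Jordan)
    p = len(X[0])
    A = [[sum(row[r] * row[c] for row in X) for c in range(p)]
         + [sum(row[r] * yi for row, yi in zip(X, y))] for r in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(p):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[r][p] / A[r][r] for r in range(p)]

knots = [0.5]
xs = [i / 10 for i in range(11)]
ys = [abs(x - 0.5) for x in xs]            # a kinked "truth"
beta = lstsq([basis(x, knots) for x in xs], ys)
fit = [sum(b * v for b, v in zip(beta, basis(x, knots))) for x in xs]
# the expanded basis reproduces the kink exactly (fit matches ys)
```

The GAM machinery is essentially this with a richer basis per predictor plus a smoothness penalty, which is why the fitted smooths remain easy to plot and explain.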
46,058
General approaches and techniques for developing good explanatory models for nonlinear data
I will spin the question to terminology that I think clarifies the issue at hand: how can we increase model flexibility while maintaining interpretability and not substantially increasing model variance (in the sense of the bias/variance trade-off)?

Applying many transformations to features & interactions is a reasonable approach to increase the flexibility of the model. Regularized regression, for example the LASSO or elastic net, can be applied on top of the synthesized feature set to fit a comparatively flexible model while simultaneously performing feature selection & shrinking model variability. In addition, regularization will shrink or outright remove model parameters for correlated features. This makes interpretation of individual effects in the fitted model more reliable by correctly attributing effect sizes.

However, this approach requires the practitioner to determine the functional form of the model. For example, how does one decide which transformations to apply? Which features to interact, and to what degree? Depending on the context, like domain knowledge or data size & sparsity, this may not be practical. In such cases we'd want even more flexible methods, but without sacrificing interpretability the way neural nets and tree ensembles typically do.

One such model is RuleFit (initial publication and short explanation). The idea is to fit arbitrary shapes in the data (flexibility) using decision rules and then build a model using a small subset of rules (variability). This is accomplished using a tree ensemble for rule generation followed by a LASSO for rule selection. Tree depth is used to control the degree of interaction between features. Note that the end result is still a linear model which enjoys the typical interpretability. The final rule set in the fitted model may provide additional insights by identifying splits in the data where the response varies largely.
A final note - while interpretable models are desirable, it is possible to perform many types of diagnostics & inference on "black box" models using model agnostic methods.
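The rules-as-features step at the heart of RuleFit can be sketched in a few lines. The two rules below are hypothetical (in RuleFit they would be read off the paths of a fitted tree ensemble, not written by hand):

```python
# Each decision rule -- a conjunction of simple conditions from a tree
# path -- becomes a 0/1 feature; a sparse linear model (e.g. the LASSO)
# is then fit on these columns to keep only a small rule subset.
rules = [
    lambda r: r["age"] > 30 and r["income"] < 50_000,   # made-up rule 1
    lambda r: r["income"] >= 50_000,                    # made-up rule 2
]

def rule_features(row):
    return [1.0 if rule(row) else 0.0 for rule in rules]

print(rule_features({"age": 42, "income": 40_000}))  # [1.0, 0.0]
```

Because the final model is linear in these binary columns, each surviving rule's coefficient reads directly as "the effect of satisfying this condition", which is where the interpretability comes from.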
46,059
General approaches and techniques for developing good explanatory models for nonlinear data
As an engineer from the physics field, I understand how this matter can be crucial.

If you need multi-dimensional "smooth" fits, Gaussian process regression usually works quite well. The analysis of your data can then be done through projection plots (for 2 or maybe 3 input parameters at the same time, at most). You don't get any analytic expression for your data, but you can still link it to your knowledge of the problem.

Another good solution that I heard of (but never tried myself) is global sensitivity analysis using Sobol indices. This method is based on the decomposition of your problem into first, second, and so on, orders, where the $n$-th order corresponds to combinations of subsets of $n$ input parameters (among the total number $N$).

Finally, there is a supervised learning technique that I like because it's a white-box approach: classification and regression trees. You get a binary tree which can be easily interpreted, because it works directly on the input parameters. Combined influences of several input parameters can be found by studying the deeper layers of the tree.
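To make the Gaussian-process suggestion concrete, here is a minimal 1-D posterior-mean sketch (my own stdlib-only illustration; a real fit would also tune the kernel hyperparameters and report predictive variances, and would use a Cholesky solve instead of Gauss-Jordan):

```python
import math

# 1-D GP regression, posterior mean only:
#   mean(x*) = k(x*, X) K^{-1} y, with a small jitter on the diagonal of K.

def rbf(a, b, length=1.0):
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][n] / M[r][r] for r in range(n)]

def gp_mean(x_train, y_train, x_new, noise=1e-6):
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(x_train)] for i, a in enumerate(x_train)]
    alpha = solve(K, y_train)                      # K^{-1} y
    return [sum(rbf(x, a) * w for a, w in zip(x_train, alpha))
            for x in x_new]

# with tiny noise, the posterior mean interpolates the training data
print(gp_mean([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], [1.0]))
```

Sweeping `x_new` over a grid for one input (holding the others fixed) is exactly the projection-plot analysis described above.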
46,060
How can a deep neural network learn anything?
You're absolutely right, it makes no sense, and it's a huge problem.

The loss function is locally linear. You are effectively saying that $L(w+h) \approx L(w)+h\cdot dL(w)$, where $L$ is your loss function, $w$ your vector of weights, and $dL$ is the gradient vector. In this sense, a change in each weight produces an approximately linear change in $L$ as a function of the step size. $dL$, and in particular its individual components $dL_i$, definitely depend on the weights in a nonlinear way. This helps to decouple the range of the output from the range of how $L$ changes as the weights change.

Due to the sensitivity you point out, there are also strategies which assign smaller learning rates to earlier layers, especially for the purposes of fine-tuning a network. Even though we choose a larger learning rate initially, this learning rate is still tuned to how sensitive $L$ is, so that the above linear relationship roughly holds.

In theory the above is right, but in practice the situation is considerably more complex. For example, in a recent paper, "The Shattered Gradients Problem: If resnets are the answer, then what is the question?" by Balduzzi et al., evidence is presented exactly to your point: vanilla deep networks have gradients that are extremely sensitive to both weights and input images, even with batch normalization. It is shown that residual networks considerably improve this sensitivity, which makes a convincing case for why they work so well.

At the same time, initially almost anything is better than random guessing. For example, if you're training a network to tell "1" apart from "7", then it's clear that "1"s have more vertical features, whereas "7"s have a ton of diagonal and horizontal features, so that even first-level convolutions that correlate slightly with vertical vs. diagonal features will contribute to improving the model.
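The local-linearity claim is easy to verify numerically on a toy loss (this example is mine, not from the answer or the paper):

```python
# Check L(w + h) ≈ L(w) + h·∇L(w) for a one-sample squared-error loss
# L(w) = (2*w1 + 3*w2 - 1)^2, using a small step h.
def loss(w):
    return (2 * w[0] + 3 * w[1] - 1) ** 2

def grad(w):
    r = 2 * w[0] + 3 * w[1] - 1
    return [2 * r * 2, 2 * r * 3]      # chain rule

w = [0.5, -0.2]
h = [1e-4, -2e-4]
linear = loss(w) + sum(hi * gi for hi, gi in zip(h, grad(w)))
# the gap between the true loss and its linearization is second order in h
print(abs(loss([wi + hi for wi, hi in zip(w, h)]) - linear))
```

Shrinking `h` by a factor of 10 shrinks the printed gap by roughly a factor of 100, which is the quadratic remainder the answer is relying on.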
46,061
How can a deep neural network learn anything?
A recent paper addresses this topic: "Doing the impossible: Why neural networks can be trained at all" by Nathan Hodas and Panos Stinis. From the abstract:

As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them. A common naive question arises: if we have a system with billions of degrees of freedom, don’t we also need billions of samples to train it? Of course, the success of deep learning indicates that reliable models can be learned with reasonable amounts of data. Similar questions arise in protein folding, spin glasses and biological neural networks. With effectively infinite potential folding/spin/wiring configurations, how does the system find the precise arrangement that leads to useful and robust results? Simple sampling of the possible configurations until an optimal one is reached is not a viable option even if one waited for the age of the universe. On the contrary, there appears to be a mechanism in the above phenomena that forces them to achieve configurations that live on a low-dimensional manifold, avoiding the curse of dimensionality. In the current work we use the concept of mutual information between successive layers of a deep neural network to elucidate this mechanism and suggest possible ways of exploiting it to accelerate training. We show that adding structure to the neural network that enforces higher mutual information between layers speeds training and leads to more accurate results. High mutual information between layers implies that the effective number of free parameters is exponentially smaller than the raw number of tunable weights.
46,062
How can a deep neural network learn anything?
This is why learning rates, despite being "as large as can be", are still quite small; values on the order of $10^{-3}$ to $10^{-4}$ are common. But you are right that this can be an issue. Batch normalization is one partial solution to this problem which often allows for faster training: normalizing the activations after each layer so that they have the same mean and variance reduces the problem of "covariate shift" -- in other words, the fact that after an iteration of SGD, the features output at a certain layer might have changed in an unpredictable way, making it hard for the subsequent layers to do their job. So to summarize: a combination of small step sizes and batch norm.
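The normalization step itself is simple. Here is a rough NumPy sketch of just that step (full batch norm also learns a per-feature scale $\gamma$ and shift $\beta$, omitted here; the example values are made up):

```python
import numpy as np

def batch_norm(a, eps=1e-5):
    # Normalize each feature (column) of a batch to zero mean, unit variance.
    # Real batch norm additionally applies a learned gamma * x + beta.
    return (a - a.mean(axis=0)) / np.sqrt(a.var(axis=0) + eps)

# Four examples, three features on wildly different scales:
batch = np.array([[1.0, 100.0, -5.0],
                  [2.0, 110.0, -4.0],
                  [3.0,  90.0, -6.0],
                  [4.0, 120.0, -3.0]])
out = batch_norm(batch)
print(out.mean(axis=0))  # each feature now has mean ~0
print(out.std(axis=0))   # and standard deviation ~1
```

After this step, whatever the previous layer did during its SGD update, the next layer always sees inputs on a comparable scale.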
46,063
Serial Mediation in R - how to setup the model?
This should probably be migrated to StackOverflow since it is about software, but: you could do this in the R package lavaan. In your model, you would first specify models for M1, M2, and Y. We will want to label all the paths as well. I will label c' as cp, for "c-prime":

M1 ~ a1 * X
M2 ~ a2 * X + d21 * M1
Y ~ cp * X + b1 * M1 + b2 * M2

The indirect effect, ind_eff, is then defined, per Hayes, as a1 * d21 * b2:

ind_eff := a1 * d21 * b2

You need to put this all in a string object:

model <- "
M1 ~ a1 * X
M2 ~ a2 * X + d21 * M1
Y ~ cp * X + b1 * M1 + b2 * M2
ind_eff := a1 * d21 * b2
"

Then you just run the model using bootstrapped confidence intervals to get the confidence interval for the indirect effect (ind_eff):

fit <- lavaan::sem(model = model, data = dat, se = "boot", bootstrap = 5000)

where dat is the name of your data frame and 5000 is the number of bootstrap resamples you would like to do (this will likely take a few minutes). To look at your results, you can call:

lavaan::parameterEstimates(fit, boot.ci.type = "bca.simple")
46,064
Validating cluster tendency using Hopkins statistic
I have also been confused about this contradictory information regarding the Hopkins statistic. In http://www.sthda.com/english/articles/29-cluster-validation-essentials/95-assessing-clustering-tendency-essentials/ it is said that We can conduct the Hopkins Statistic test iteratively, using 0.5 as the threshold to reject the alternative hypothesis. That is, if H < 0.5, then it is unlikely that D has statistically significant clusters. Put in other words, if the value of the Hopkins statistic is close to 1, then we can reject the null hypothesis and conclude that the dataset D is significantly clusterable. And there is also an example on the iris dataset using the get_clust_tendency() function showing that for a highly clusterable dataset the Hopkins statistic is 0.818, but for a random dataset 0.466. However, if you repeat their analysis, you will actually get 0.182 for the clusterable dataset and 0.534 for the random dataset. This suggests that the get_clust_tendency() function used has been changed (when? why?) such that the Hopkins statistic is now computed as $1-H$, where (definition from Wikipedia) $ H=\frac{\sum_{i=1}^m{u_i^d}}{\sum_{i=1}^m{u_i^d}+\sum_{i=1}^m{w_i^d}} $, with $u_i$ the nearest-neighbour distances from uniformly generated sample points to sample data from the given dataset, and $w_i$ the nearest-neighbour distances within sample data from the given dataset. Thus, for clusterable datasets the Hopkins statistic is close to 0 if computed using the get_clust_tendency() function from the factoextra package.
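The Wikipedia definition quoted above is straightforward to implement directly. Here is a minimal NumPy sketch of that textbook formula (not any package's implementation; sample sizes and cluster parameters are arbitrary), under which clustered data gives H near 1 and uniform data gives H near 0.5:

```python
import numpy as np

def hopkins(X, m=None, seed=0):
    # H = sum(u_i^d) / (sum(u_i^d) + sum(w_i^d)), where
    # u_i: NN distance from uniform sample points to the real data,
    # w_i: NN distance from sampled real points to *other* real points.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or max(1, n // 10)
    lo, hi = X.min(axis=0), X.max(axis=0)
    u = np.array([np.linalg.norm(X - q, axis=1).min()
                  for q in rng.uniform(lo, hi, size=(m, d))])
    w = np.array([np.sort(np.linalg.norm(X - X[i], axis=1))[1]  # skip self
                  for i in rng.choice(n, m, replace=False)])
    return (u**d).sum() / ((u**d).sum() + (w**d).sum())

rng = np.random.default_rng(1)
clustered = np.vstack([rng.normal(0, 0.05, (100, 2)),
                       rng.normal(5, 0.05, (100, 2))])
uniform = rng.uniform(0, 1, (200, 2))
print(round(hopkins(clustered), 2))  # close to 1: clusterable
print(round(hopkins(uniform), 2))    # near 0.5: no structure
```

A package reporting $1-H$ would give values near 0 for the clustered data instead, which is exactly the discrepancy described above.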
46,065
Validating cluster tendency using Hopkins statistic
Hopkins is a pretty extreme test for uniform distributions. It's naive to assume that data will cluster just because it has a tendency - the test is mostly useful for detecting uniform data. The problem is that it doesn't imply a multimodal distribution. A single Gaussian will have a "clustering tendency" according to the Hopkins test. But running cluster analysis on a single Gaussian is pointless: the best result is "everything is the same cluster". It just tested that a Gaussian is not uniform. Nevertheless, I would expect a small value to indicate that the data looks uniform, at least in the current normalization. If Hopkins indicates a uniform distribution, then you can clearly stop - there is probably some bad column ruining the analysis. But with the arguments above, the more sane interpretation is to use the opposite statistic 1-H and interpret this as a "normalized deviation from uniform data".
46,066
Validating cluster tendency using Hopkins statistic
Based on the source code for get_clust_tendency() it appears that Iden's answer is correct, and for some versions of the factoextra package the Hopkins statistic may be calculated as (1-H). Check the code for the get_clust_tendency() function in your version of factoextra; if you see that the Hopkins statistic is calculated as... hopkins_stat = sum(minq)/(sum(minp) + sum(minq)) then this is the (1-H) version. Contrary to the explanation given on Assessing Clustering Tendency, sum(minq) is actually the sum of the nearest-neighbor distances for the real points, not the artificial ones. With respect to the formula in Iden's answer $$ H=\frac{\sum_{i=1}^m{u_i^d}}{\sum_{i=1}^m{u_i^d}+\sum_{i=1}^m{w_i^d}} \ $$ get_clust_tendency() may actually be returning $$ H=\frac{\sum_{i=1}^m{w_i^d}}{\sum_{i=1}^m{u_i^d}+\sum_{i=1}^m{w_i^d}} \ $$ In the most recent version of the factoextra package this seems to have been corrected; see code line 110 in the source code, where hopkins_stat = sum(minp)/(sum(minp) + sum(minq))
46,067
Validating cluster tendency using Hopkins statistic
Actually, as already mentioned in the previous answers, it is a matter of how the Hopkins statistic has been calculated. The literature has clear information on what value is expected. For example, Lawson and Jurs (1990) and Banerjee & Dave (2004) explain that you may expect 3 different results: 1) H = 0.5 (the dataset reveals no clustering structure; in the formula, W always refers to the real data, and it is in the denominator) 2) H close to 1.0, significant evidence that the data might be cluster-able. 3) H close to 0, in which case the test is indecisive (data are neither clustered nor random). Based on the above info, you may find that get_cluster_tendency(df, n, ..) provides the right calculation, whereas hopkins(df, ..) provides a reversed result, as it might have been calculated based on 1-H. Cheers!
46,068
Validating cluster tendency using Hopkins statistic
I have been pretty confused too with the contradicting information available online. However, if you are using the pyclustertend library, the documentation says that if the Hopkins value is less than 0.3, there are possibilities of clusters. To quote: Hopkins test A statistical test which allows one to guess if the data follow a uniform distribution. If the test is positive (a Hopkins score which tends to 0) it means that the data is not uniformly distributed. Hence clustering can be useful to classify the observations. However, if the score is too high (above 0.3 for example), the data is uniformly distributed and clustering can't be really useful for the problem at hand.
46,069
What exactly happens when I do a feature cross?
Your data is not linearly separable in the original space. But it seems like it actually is separable with a circle/ellipse (let's say it's inside a circle to simplify the problem): it seems reasonable to hypothesize that, for some $c$, a point is blue if $x^2 + y^2 < c$. That means that if you use $x^2, y^2$ as features, you can fit a linear classifier to these data points and actually separate the classes linearly.
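This is easy to see numerically. Below is a sketch with a plain-NumPy logistic regression (the circle radius, learning rate, and sample size are arbitrary choices): on the raw $(x, y)$ features the linear classifier barely beats predicting the majority class, while on the crossed features $(x^2, y^2)$ it separates the classes almost perfectly:

```python
import numpy as np

def logreg_acc(F, y, lr=0.5, steps=2000):
    # Plain logistic regression via full-batch gradient descent;
    # returns the training accuracy of the fitted linear classifier.
    A = np.column_stack([F, np.ones(len(F))])  # add a bias column
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w -= lr * A.T @ (p - y) / len(y)
    return ((A @ w > 0).astype(int) == y).mean()

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = ((X**2).sum(axis=1) < 0.4).astype(int)  # "blue" = inside a circle

acc_raw = logreg_acc(X, y)         # linear in (x, y): ~majority-class accuracy
acc_crossed = logreg_acc(X**2, y)  # linear in (x^2, y^2): near-perfect
print(acc_raw, acc_crossed)
```

The boundary $x^2 + y^2 = c$ is a circle in the original space but a straight line in the $(x^2, y^2)$ space, which is why the crossed features work.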
46,070
Defining Conditional Likelihood
Usually one assumes that there is a distribution $$p_{\text{data}}(y,x)$$ that defines not only the distributions of $x$ and $y$ but also their dependency (i.e. if $y_i = f(x_i)$ then we can estimate $f$ by computing the conditional expectation $E[y|X=x]$ with respect to this common probability distribution, and so on). Now we do not only assume that the $x_i$ were drawn independently but rather that the whole tuples $(y_1, x_1), ..., (y_n,x_n)$ were drawn independently. Caution: this does in no way mean that $y_i$ is independent from $x_i$; it only means that $$p(y,x) = \prod_{i=1}^n p(y_i, x_i)$$ and by using marginalization and the Theorem of Fubini (wikipedia) we see that \begin{align*} p(y|x) &= \frac{p(y,x)}{p(x)} = \frac{p(y,x)}{\int p(\hat{y}, x) d\hat{y}} \\ &= \frac{\prod_{i=1}^n p(y_i, x_i)}{\int ... \int \prod_{i=1}^n p(\hat{y}_i, x_i) d\hat{y}_1 ... d\hat{y}_n} \\ &= \frac{\prod_{i=1}^n p(y_i,x_i)}{\prod_{i=1}^n \int p(\hat{y}_i, x_i) d\hat{y}_i} \\ &= \prod_{i=1}^n p(y_i|x_i) \end{align*} So, you can safely feel comfortable with this: it follows from the basic assumption we always make, namely that the observed data are independent. Edit: note that usually we do not make assumptions about the joint probability $p(y,x)$ or $p(y_i, x_i)$ directly, but rather we assume that $y_i = f(x_i) + \text{'small' error}$ for a single function $f$ and then make assumptions on $f$; for example, in linear regression we assume that $$f(x) = \beta^T \cdot x$$
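The factorization can be sanity-checked numerically on a tiny discrete example (an arbitrary $2\times 2$ joint table, with $x$ and $y$ deliberately dependent, and two i.i.d. tuples):

```python
import numpy as np
from itertools import product

# An arbitrary 2x2 joint table p[y, x] with x and y dependent.
p = np.array([[0.30, 0.10],
              [0.05, 0.55]])
p_x = p.sum(axis=0)                 # marginal p(x)

def cond(y, x):
    # p(y | x) = p(y, x) / p(x)
    return p[y, x] / p_x[x]

# Two i.i.d. tuples (y1, x1), (y2, x2): check that
# p(y1, y2 | x1, x2) = p(y1 | x1) * p(y2 | x2) for a fixed x = (0, 1).
x1, x2 = 0, 1
for y1, y2 in product([0, 1], repeat=2):
    joint = p[y1, x1] * p[y2, x2]   # p(y, x), by independence of the tuples
    px = p_x[x1] * p_x[x2]          # p(x), after marginalizing out y
    assert np.isclose(joint / px, cond(y1, x1) * cond(y2, x2))
print("p(y|x) factorizes into the product of p(y_i|x_i)")
```

Note that within each tuple $y_i$ depends strongly on $x_i$ here; only the tuples themselves are independent, exactly as in the derivation above.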
46,071
Why Poisson and Binomial distribution are giving different results for the same problem?
Both yield nearly the same result:

> dpois(2, 2)
[1] 0.2706706
> dbinom(2, 100, .02)
[1] 0.2734139

Both results would get more similar as n tends to infinity and p tends to zero, but n = 100 is large yet a lot smaller than infinity, so you get a result accurate only to a couple of significant digits. Edit in response to comment: "Ok.. So based on n should I choose which method to use?" The variable in your statement is distributed according to a binomial. Therefore, the binomial distribution produces the exact result, and you should use the binomial distribution if you can do the maths. However, sometimes calculation with the binomial is difficult, especially when computations need to be done by hand or when some parameters are unknown. Then you can resort to two approximations of the binomial if n is large: if n is large and p is small, you can use the Poisson distribution to approximate the binomial (as in this problem); if n is large and n*p is not small, you can use the normal distribution to approximate the binomial. For example, solving your problem by hand using the binomial involves calculating $0.98^{98}$, which can take a bit long, while solving it using Poisson doesn't need anything harder than $e^{-2}$, which is a lot easier even if you don't have a logarithms table at hand. However, if you use R, Excel or any other software with statistical capabilities, you don't need to worry about such approximations because the program handles them when needed.
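The convergence is easy to see numerically by holding $np = 2$ fixed and letting $n$ grow (a quick sketch using only the standard library, no statistics package needed):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    # Exact binomial probability: C(n, k) p^k (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def pois_pmf(k, lam):
    # Poisson probability: e^{-lam} lam^k / k!
    return exp(-lam) * lam**k / factorial(k)

# Hold n*p = 2 fixed and let n grow: the binomial approaches Poisson(2).
for n in (10, 100, 1000, 100000):
    print(n, binom_pmf(2, n, 2 / n))
print("Poisson limit:", pois_pmf(2, 2))
```

At n = 100 the two already agree to two significant digits, matching the R output above; by n = 100000 the difference is negligible.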
46,072
Inferring the number of topics for gensim's LDA - perplexity, CM, AIC, and BIC
Counterintuitively, it appears that the log_perplexity function doesn't output a $perplexity$ after all (the documentation of the function wasn't clear enough for me personally), but a likelihood $bound$ which must be plugged into the perplexity's lower-bound equation thus (taken from this paper - Online Learning for Latent Dirichlet Allocation by Hoffman, Blei and Bach): $$ perplexity (n^{test}, \lambda, \alpha) \leq \exp\left\{ - \frac{\sum_i \left( \mathbb{E}_q [\log p(n_i^{test}, \theta_i, z_i | \alpha, \beta)] - \mathbb{E}_q[\log q(\theta_i, z_i)] \right)}{\sum_{i,w}{n_{iw}^{test}}} \right\} $$ Viz., $$ perplexity (n^{test}, \lambda, \alpha) \leq e^{- bound} $$ Some people like to use $2$ instead of $e$ in the equation above. For calculating $AIC$ and $BIC$, one usually needs the Bayesian likelihood of the model, not necessarily the $SSE$, especially in a topic modelling environment. Finally, as for the UMass coherence measure, to the best of my knowledge it hasn't been used in a model selection scenario with LDA yet, but the sharp dip I got at $k=20$ (the proper number of topics according to the 20 newsgroups dataset) is encouraging. However, topic coherence measures should optimally be close to zero, so that sharp dip isn't an improvement, but rather a deterioration in the coherence (the meaningfulness or interpretability) of topics.
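So, to turn the value returned by log_perplexity into an actual perplexity estimate, you exponentiate its negative. A trivial helper makes the base choice explicit (whether base $e$ or base $2$ is appropriate depends on which base the bound was computed in; the -7.5 below is a purely hypothetical per-word bound, not a real model output):

```python
import math

def perplexity_from_bound(per_word_bound, base=math.e):
    # perplexity <= base^(-bound), where `bound` is a per-word
    # likelihood bound such as the return value of log_perplexity.
    return base ** (-per_word_bound)

# A hypothetical per-word bound of -7.5 on a held-out corpus:
print(perplexity_from_bound(-7.5))           # e^7.5
print(perplexity_from_bound(-7.5, base=2))   # 2^7.5
```

Since the bound is negative for real corpora, the resulting perplexity is a (large) positive number, and lower is better across candidate topic counts.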
46,073
Rationale behind shrinking regression coefficients in Ridge or LASSO regression
Here is the general intuition behind shrinking coefficients in linear regression. Borrowing figures and equations from Pattern Recognition and Machine Learning by Bishop. Imagine that you have to approximate the function $y = \sin(2\pi x)$ from $N$ observations. You can do this using linear regression, which fits an $M$-degree polynomial, $$ y(x, \textbf{w}) = \sum_{j=0}^{M}{w_j x^j} $$ by minimizing the error function, $$ E(\textbf{w}) = \frac{1}{2} \sum_{n=1}^{N}{\{ y(x_n, \textbf{w}) - t_n \}^2} $$ By choosing different values of $M$, one can fit polynomials of varying complexity. Here are some example fits (red lines) and the corresponding values of $M$. Blue dots represent the observations and the green line is the true underlying function. The goal is to fit a polynomial which closely approximates the underlying function (green line). Watch what happens with the high-degree polynomial, $M=9$. This polynomial gives the minimum error, since it passes through all the points. But this is not a good fit, because the model is fitting the noise in the data rather than the underlying function. Since the overall goal of linear regression is to be able to predict $t$ for an unknown value of $x$, you will be screwed with the high-degree polynomial! Further, let's take a look at the values of the regression coefficients: watch how they explode for the higher-degree polynomial. The solution to this problem is regularization, where the error function to be minimized is redefined as follows: $$ E'(\textbf{w}) = \frac{1}{2} \sum_{n=1}^{N}{\{ y(x_n, \textbf{w}) - t_n \}^2} + \frac{\lambda}{2} \Vert \textbf{w} \Vert^2 $$ This gives us the formulation of ridge regression. The inclusion of the penalty term, the squared $L2$ norm, discourages the regression coefficients from taking large values, thus preventing over-fitting.
Lasso has a similar formulation, where the penalty term is the $L1$ norm. Inclusion of the lasso penalty has an interesting effect -- it drives some of the coefficients exactly to zero, giving a sparse solution. $$ E''(\textbf{w}) = \frac{1}{2} \sum_{n=1}^{N}{\{ y(x_n, \textbf{w}) - t_n \}^2} + \frac{\lambda}{2} \Vert \textbf{w} \Vert_1 $$
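The effect of the penalty can be sketched numerically. A minimal illustration (my own sketch under the setup above, not Bishop's code): fit a degree-9 polynomial to 10 noisy samples of $\sin(2\pi x)$ with and without a ridge penalty, and compare the coefficient magnitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# N = 10 noisy observations of the true function sin(2*pi*x)
N = 10
x = np.linspace(0, 1, N)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, N)

# Design matrix of a degree M = 9 polynomial: columns x^0 .. x^9
M = 9
X = np.vander(x, M + 1, increasing=True)

# Unregularized fit: minimizes E(w) alone and interpolates the noise
w_plain = np.linalg.lstsq(X, t, rcond=None)[0]

# Ridge fit: minimizes E'(w); closed form (X'X + lam*I)^{-1} X't
# (lam = 0.1 is a fairly strong penalty, chosen for illustration)
lam = 0.1
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(M + 1), X.T @ t)

# The penalty keeps the coefficients from exploding
print(np.abs(w_plain).max(), np.abs(w_ridge).max())
```

The interpolating fit needs enormous coefficients to thread every noisy point, while the penalized fit keeps them small.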
46,074
Rationale behind shrinking regression coefficients in Ridge or LASSO regression
The question is what we really mean by "fit the best line." Yes, a standard unpenalized regression will be the "best line" to fit the data sample that you have. It might not, however, be the "best line" for a new sample from the population, as that standard unpenalized regression might pick up quirks that are peculiar to the particular data sample that you have. Penalizing the regression coefficients trades off a "worse" fit to the present data sample for a better fit to future samples, which is particularly important if the regression model is to be used for predictions. In response to comment: I don't think anyone can guarantee that penalization will always perform better than unpenalized regression, but penalization does help to avoid some potentially important problems, particularly in the context of a predictive model. An Introduction to Statistical Learning is a good introduction to this and to many other issues in statistical modeling. Try repeating the modeling (both standard and penalized) on multiple bootstrap samples from your data set and see how well the standard and penalized models fit the full data set to get an idea about how this might play out in your situation. The issue of scaling data for regularization is complicated. The usual approach is to standardize all predictors to unit standard deviation before penalization. The idea is to treat all predictors as equally as possible; you don't want different penalization if, for example, you measure lengths in millimeters versus kilometers. But standardization isn't straightforward with categorical predictors, and sometimes it's important to ensure that some predictors aren't penalized. There is also the issue of how the penalization coefficient is chosen. So how the coefficients of particular predictors will differ between penalized and standard regression models depends on details of the analysis.
In ridge regression the coefficients will typically be lower in magnitude in the penalized model, but in LASSO the complete removal of some predictors from the penalized model might require predictors correlated with the removed ones to have higher-magnitude coefficients than they would in the standard regression.
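The bootstrap comparison suggested above can be sketched as follows (a minimal illustration with made-up data; the ridge closed form stands in for any penalized fit):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data: few observations, correlated predictors -- the setting
# where penalization is most likely to help.
n, p = 40, 10
common = rng.normal(size=(n, 1))
X = common + 0.3 * rng.normal(size=(n, p))   # columns share a common factor
beta = np.zeros(p)
beta[:3] = [1.0, -1.0, 0.5]
y = X @ beta + rng.normal(size=n)

def fit(Xb, yb, lam):
    """Ridge closed form; lam = 0 gives ordinary least squares."""
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ yb)

# Refit on bootstrap resamples, then score each fit on the full data set
B = 200
mse = {0.0: [], 5.0: []}                     # lam = 0 (standard) vs lam = 5
for _ in range(B):
    idx = rng.integers(0, n, size=n)         # bootstrap resample
    for lam in mse:
        w = fit(X[idx], y[idx], lam)
        mse[lam].append(np.mean((y - X @ w) ** 2))

print({lam: round(float(np.mean(v)), 3) for lam, v in mse.items()})
```

Comparing the two average errors over many resamples gives a rough feel for how much the bootstrap-to-full-data gap differs between the standard and penalized fits in your own data.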
46,075
Rationale behind shrinking regression coefficients in Ridge or LASSO regression
Shrinkage methods limit the magnitude of the regression coefficients, which helps avoid model overfitting. From Elements of Statistical Learning: When there are many correlated variables in a linear regression model, their coefficients can become poorly determined and exhibit high variance. A wildly large positive coefficient on one variable can be canceled by a similarly large negative coefficient on its correlated cousin. By imposing a size constraint on the coefficients [...] this problem is alleviated. The ridge regression squared penalty term can be thought of as follows. The residual sum of squares (RSS) is $RSS(\beta)=\sum_{i=1}^{n}[y_{i}-\beta_{0}-\sum_{j=1}^{p}(x_{ij}\beta_{j})]^{2}$. We add $p$ dummy observations $y_{n+j}=0$ for $j=1,\dots,p$, with corresponding inputs $x_{n+i,j}$ such that $x_{n+j,j}=\sqrt{\lambda}$ and $x_{n+i,j}=0$ for $i\neq j$. Rewriting the RSS to include those $p$ observations, we obtain $RSS^{ridge} = RSS(\beta) + \lambda \sum_{j=1}^{p}\beta_{j}^{2}$. When $\lambda$ increases, the non-zero entries of the $p$ added rows also increase, and their influence on the regression increases; when $\lambda \to \infty$ the coefficients tend to zero.
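The augmentation trick is easy to verify numerically (a quick sketch with random data, omitting the intercept for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, lam = 30, 4, 2.5
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Ridge closed form: (X'X + lam*I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Augmented data: p dummy rows sqrt(lam)*e_j with targets 0
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])

# Plain least squares on the augmented problem
beta_aug = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

print(np.allclose(beta_ridge, beta_aug))  # True: the two solutions coincide
```

Each dummy row contributes $(0 - \sqrt{\lambda}\,\beta_j)^2 = \lambda\beta_j^2$ to the augmented RSS, which is exactly the ridge penalty.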
46,076
Rationale behind shrinking regression coefficients in Ridge or LASSO regression
Ridge regression is mathematically equivalent to a Bayesian regression with a Gaussian prior on $\beta$ centered around 0 (see page 11 of the Gaussian Process book here). I think that's helpful as a way to understand when to deploy ridge regression. If you are worried that the data may generate unnaturally large $\beta$ (say, many $x$ share a high linear correlation) you might think of ridge regression as a way to limit that problem. It tends to produce smaller $\beta$ than the corresponding OLS. Lasso regression tends to set a lot of $\beta$ to zero. It is useful in studies where you have many potential $x$ but only a few observations (say, genetics) or where you don't have a strong theoretical reason to believe that all $x$ matter. I think the most common approach is actually to do both lasso and ridge at the same time (the elastic net approach). R has the beautiful glmnet package precisely for this purpose (including self-tuning by cross-validation). Of course each problem is different, and what you should do is to withhold some data and test prediction error after the fits are done.
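The equivalence is easy to check numerically: with likelihood $y \sim N(X\beta, \sigma^2 I)$ and prior $\beta \sim N(0, \tau^2 I)$, the posterior mean equals the ridge estimate with $\lambda = \sigma^2/\tau^2$ (a quick sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

sigma2, tau2 = 1.0, 0.25          # noise variance, prior variance of beta

# Posterior mean under y ~ N(X beta, sigma2*I), beta ~ N(0, tau2*I):
# (X'X/sigma2 + I/tau2)^{-1} X'y/sigma2
post_mean = np.linalg.solve(X.T @ X / sigma2 + np.eye(p) / tau2,
                            X.T @ y / sigma2)

# Ridge estimate with lambda = sigma2 / tau2
lam = sigma2 / tau2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print(np.allclose(post_mean, beta_ridge))  # True
```

A tighter prior (smaller $\tau^2$) means a larger $\lambda$ and therefore stronger shrinkage toward zero.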
46,077
What is the difference between a Spine, Register, and Index?
I couldn't find any single resource that would answer all parts of your question. Here's what I could piece together from multiple sources. A register-based census is a type of census that relies on already existing population registers (according to Google's dictionary function, a register is simply "an official list or record, for example of births, marriages, and deaths [...]"). See e.g. this description in Wikipedia of the new Swiss census format (with which I have some first-hand experience): "In order to ease the burden on the population, the information is primarily drawn from population registers and supplemented by sample surveys. Only a small proportion of the population (about 5%) is surveyed in writing or by telephone." The way this works in Switzerland is that the municipalities and the states (cantons) already have very detailed population registers, since the population has to report all changes in their residence and in their civil status to the municipalities concerned. The Federal Statistical Office can thus rely to a large extent on this information. The register-based census stands in contrast to a traditional census, in which the entire population has to fill out and return a census form. An example of such a traditional census is the United States Census. This distinction fits what Statistics New Zealand describes as pre-conditions for a register-based census in the document you linked to: "a strong legal basis, public approval, unified identification systems, and comprehensive and reliable register systems developed for administrative needs."
strong legal basis and public approval: the government would need to be allowed to use this data, which has hitherto been collected for different purposes that may not include census purposes; unified identification systems: if Statistics New Zealand is going to pull together population registers from different sources, it will have to be possible to somehow match them (e.g. what if one dataset includes middle names and the other one doesn't?); comprehensive and reliable register systems: this one is obvious; a switch to a register-based system should not impact the quality of the census (in the Swiss example that I mentioned above, municipalities have long since maintained accurate population registers, but, for various reasons, that is not the case everywhere). After reading the description in Wikipedia of the New Zealand census, it seems natural that they would at least consider switching to a register-based census: "all census forms are hand-delivered by census workers". A statistical spine is a concept related to the register-based census. When you are pulling together information from various data sources, data points will need to be linked and checked for duplicates and omissions. E.g. the health data on John Smith should ideally be linked to the same person's tax data, etc. The spine thus describes this unification of different registers. The analogy to the biological spine probably comes from the fact that you have one variable that all or most datasets have in common, such as people's names, "in the middle", and there are then various other data sources attached to each name. See e.g. this document by the UK's Office for National Statistics (search for "spine" and start reading at the first appearance) or this sentence from the New Zealand document that you provided: "Within the statistical agency, the population register serves as a central spine and establishes the reference population of people resident in the country." As for the term index, I think it is probably used interchangeably with register and dataset, but I'm happy to be proved wrong.
46,078
disadvantages variational inference
Further disadvantages: The outcome tends to depend heavily on the starting point for the optimization. Example: this paper, which is heavily cited but known to have severe problems (software packages based on it were later withdrawn, etc.). The calculations required to figure out what you are optimizing are often very complicated. (See any paper on variational inference.) On the plus side, there is an excellent introduction to the subject in MacKay's textbook Information Theory, Inference, and Learning Algorithms.
46,079
disadvantages variational inference
Briefly: Disadvantages: approximate, very little theory around it. Advantages: speed, scalability, novelty. There isn't much theory around variational inference. However you define "optimal" (see below), you probably can't expect to obtain it. VI is a method for approximating a difficult-to-compute probability density, $p$, by optimization. This is done by positing a family of distributions $\mathcal Q$ and finding the member $q \in \mathcal Q$ that has the lowest Kullback–Leibler divergence $KL(q \|p)$. How well you can approximate $p$ naturally depends on your choice of $\mathcal Q$, but you can expect that some aspect of $p$ is lost when substituting $q$ for it. VI doesn't guarantee that you find the globally optimal member $q \in \mathcal Q$ either. A common choice is to use what's called the mean-field variational family and find $q$ by coordinate ascent, which may only find a local optimum. A big advantage is that VI is very fast and scales well to large datasets. It is natural to compare it with MCMC methods, as these solve the same problem; see the answer to this related question, which compares the two. Reading: David M. Blei, Alp Kucukelbir, Jon D. McAuliffe, Variational Inference: A Review for Statisticians.
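A toy illustration of these points (my own sketch, not from the review paper): approximate a bimodal target $p$ with the family $\mathcal Q$ of single Gaussians by brute-force minimization of $KL(q\|p)$. The best $q$ locks onto one mode rather than spanning both, showing how the choice of $\mathcal Q$ loses aspects of $p$:

```python
import numpy as np

# Target p: a well-separated two-component Gaussian mixture
xs = np.linspace(-8, 8, 4001)
dx = xs[1] - xs[0]

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

p = 0.5 * normal_pdf(xs, -3, 0.5) + 0.5 * normal_pdf(xs, 3, 0.5)

def kl_q_p(mu, sd):
    """KL(q || p) on the grid, skipping points where q underflows."""
    q = normal_pdf(xs, mu, sd)
    m = q > 1e-12
    return float(np.sum(q[m] * np.log(q[m] / p[m])) * dx)

# Q = single Gaussians; brute-force search over (mu, sd)
grid = [(mu, sd) for mu in np.linspace(-5, 5, 101)
                 for sd in np.linspace(0.2, 4, 39)]
best_mu, best_sd = min(grid, key=lambda t: kl_q_p(*t))
print(best_mu, best_sd)  # q sits on one mode, with a small sd
```

Because $KL(q\|p)$ heavily penalizes putting $q$-mass where $p$ is near zero, the optimal single Gaussian is mode-seeking: it covers one component and ignores the other, rather than matching the overall mean and variance of $p$.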
46,080
Why is ROC curve used in assessing how 'good' a logistic regression model is?
A logistic regression doesn't "agree" with anything because the nature of the outcome is 0/1 and the nature of the prediction is a continuous probability. Agreement requires comparable scales: 0.999 does not equal 1. One way of developing a classifier from a probability is by dichotomizing at a threshold. The obvious limitation with that approach: the threshold is arbitrary and can be artificially chosen to produce very high or very low sensitivity (or specificity). Thus, the ROC considers all possible thresholds. A discriminating model is capable of ranking people in terms of their risk. The predicted risk from the model could be way off, but if you want to design a substudy or clinical trial to recruit "high risk" participants, such a model gives you a way forward. Preventative tamoxifen is recommended for women in the highest risk category of breast cancer as the result of such a study. Discrimination != Calibration. If my model assigns all non-events a probability of 0.45 and all events a probability of 0.46, the discrimination is perfect, even if the incidence/prevalence is <0.001.
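The last point can be made concrete. AUC (the area under the ROC curve) equals the probability that a randomly chosen event outscores a randomly chosen non-event, so the 0.45/0.46 example gives perfect discrimination despite terrible calibration (a small sketch):

```python
import numpy as np

def auc(scores, labels):
    """AUC = P(score of a random event > score of a random non-event),
    with ties counted as 1/2, computed over all event/non-event pairs."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

# Every non-event predicted 0.45, every event 0.46; incidence 0.005
labels = np.array([0] * 995 + [1] * 5)
scores = np.where(labels == 1, 0.46, 0.45)

print(auc(scores, labels))   # 1.0 -- perfect discrimination
print(scores[labels == 1])   # yet the predicted risks are wildly miscalibrated
```

Every event outscores every non-event, so the ranking is perfect even though the predicted probabilities are nowhere near the true incidence.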
46,081
Can Convolution Neural Network be useful with encoded categorical features?
I would hazard a guess that in this case your convolution kernels start memorizing groups of features and useful correlations between consecutive pairs of features. As a concrete example, imagine we have four features: (age, gender, education, race). You can then use, say, 1x2 convolution kernels with weights $w_1,w_2$, which slide across your features. So your kernel will extract weighted pairings: $out_1=w_1*age+w_2*gender$, $out_2=w_1*gender+w_2*education$, $out_3=w_1*education+w_2*race$. The simplest kernel might look like $w_1=1$, $w_2=0$, in which case the outputs would just be age, gender and education. Another kernel might be $w_1=0,w_2=1$, which would get you "race" in the last output, $out_3$. When features are correlated and the set of features is nonlinear, you might see weights like $w_1=0.6,w_2=-0.3$; when the outputs are passed along to fully connected layers, maybe only $out_1$ is used to capture the correlation between age and gender, and your fully connected layers will manage the rest. Usually, though, convolutions are useful when your features are invariant under translation (an example is detection of faces in images), which in this case they are not. So I think your network would just use, say, only $out_1$ from the first kernel, $out_2$ from the second kernel, etc. Again, if there's some correlation, the picture will likely be more convoluted, especially if training gets stuck in a local minimum. You might think this is useful in NLP, but the evidence here is slimmer. A great example is convolutional neural nets used for text classification here: https://arxiv.org/abs/1502.01710.pdf The kernels here are defined over one-hot encodings of words. However, it's difficult to tell whether all the network is doing is just memorizing sequences of words in a very expensive way. Example: "[blah] was delicious!" would strongly correlate with "food."
FastText was trained to do the same thing, except instead of training kernels, they literally enumerated billions of n-grams and used each as a feature in a sparse logistic classifier. The result was considerably faster (we're talking minutes vs days for training) and essentially as accurate: https://arxiv.org/abs/1607.01759
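The sliding-kernel arithmetic described above can be written out directly (a toy sketch with made-up feature values):

```python
import numpy as np

# Toy encoded sample: four features in a fixed order
#              age   gender  education  race
x = np.array([0.3,   1.0,    0.7,       0.0])

def conv_1x2(x, w):
    """'Valid' 1x2 convolution: out[i] = w[0]*x[i] + w[1]*x[i+1]."""
    return np.array([w[0] * x[i] + w[1] * x[i + 1] for i in range(len(x) - 1)])

print(conv_1x2(x, [1.0, 0.0]))   # [0.3, 1.0, 0.7]: copies age, gender, education
print(conv_1x2(x, [0.0, 1.0]))   # picks up race in the last slot
print(conv_1x2(x, [0.6, -0.3]))  # mixed adjacent-feature pairings, as in the text
```

Note how each output depends only on a fixed adjacent pair of features, which is why the usefulness of such kernels hinges on the (arbitrary) column ordering of tabular data.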
Can Convolution Neural Network be useful with encoded categorical features?
I think you can simply apply convolutional neural nets to categorical data. As we know, there are two representations of text (think of it as a huge categorical variable): one-hot and embedding representations. You can merge all of your categorical data into one feature: for example, by merging 3 genders and 8 ethnicities you will have 24 values of a new feature. However, you also have numerical data alongside the categorical data; in this case you would do better to do some feature extraction on the categorical data with an embedding layer and then merge all of your data. At this stage your data will be ready as input to a convolutional neural network.
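The merge described above is just an index computation. A minimal sketch, using the 3-gender and 8-ethnicity counts from the answer (the function name is hypothetical):

```python
# Merge two categorical features into a single 3 * 8 = 24-value feature.
N_GENDER, N_ETHNICITY = 3, 8

def merged_code(gender_idx, ethnicity_idx):
    """Map a (gender, ethnicity) pair to a single code in 0..23."""
    return gender_idx * N_ETHNICITY + ethnicity_idx

print(merged_code(0, 0))   # 0
print(merged_code(2, 7))   # 23, the last of the 24 combined values
```

The combined codes can then be fed to an embedding layer instead of one-hot encoding all 24 levels.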
When a one-tailed test passes but a two-tailed test does not
Case 1: Entertaining the hypothesis that average height may have increased or decreased, we cannot reject the null hypothesis that neither has happened.

Case 2: Entertaining the hypothesis that average height may have increased, we reject the null hypothesis that it hasn't.

Both are examined at the same accepted Type I error probability (e.g. 5%). By "casting a wider net" (Case 1), we require more from our data sample, since we ask it to statistically "disprove/not disprove" two effects at once (increase and decrease).

Assume that descriptive statistics of the data sample indicate that the current average height is greater than in the past. The data is already showing us the way, and what is left is to test whether the observed increase is statistically large enough. To execute a two-tailed test here would be wrong, since it would artificially dilute the informational potential of the data sample.
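The "wider net" can be made concrete with a z-statistic that lands between the one-sided and two-sided 5% thresholds; the value 1.8 below is invented for illustration, and a normal test statistic is assumed:

```python
from statistics import NormalDist

z = 1.8                           # hypothetical z-statistic for the height increase
p_one = 1 - NormalDist().cdf(z)   # one-sided p-value, roughly 0.036
p_two = 2 * p_one                 # two-sided p-value, roughly 0.072

print(p_one < 0.05)   # True:  Case 2 rejects
print(p_two < 0.05)   # False: Case 1 does not
```

The same data thus rejects in Case 2 but not in Case 1, exactly the situation in the question.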
When a one-tailed test passes but a two-tailed test does not
As @Aksakal says, there is nothing weird about this: it is easy to see that the significance level (for a continuous random variable) is equal to the probability of a type I error. So your one-sided and two-sided tests have the same type I error probability. What differs is the power of the two tests: if you know that the alternative is an increase, then for the same type I error probability, the type II error probability is lower with the one-sided test (or, the power is higher).

In fact, it can be shown that, for a given type I error probability (and in the univariate case), the one-sided test is the most powerful you can find, whatever the alternative is. It is thus the UMPT, the Uniformly Most Powerful Test.

It all depends on what you want to test. Assume you want to buy lamps from your supplier and the supplier says that the lifetime of a lamp is 1000 hours (on average). If you want to test these lamps, then you will probably not care if these lamps live longer, so you will test $H_0: \mu=1000$ versus $H_1: \mu < 1000$, because this test, for the same type I error probability, has more power.

See also What follows if we fail to reject the null hypothesis?
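The power gap can be checked numerically. A sketch, assuming a normal test statistic whose mean shifts by 2 standard errors under the alternative (an arbitrary choice for illustration):

```python
from statistics import NormalDist

phi = NormalDist().cdf
ncp = 2.0  # assumed shift of the test statistic under the alternative, in SE units

power_one = phi(ncp - 1.645)                      # one-sided test at alpha = 5%
power_two = phi(ncp - 1.960) + phi(-ncp - 1.960)  # two-sided test at alpha = 5%

print(round(power_one, 3), round(power_two, 3))   # the one-sided power is larger
```

Both tests keep the 5% type I error under the null, but for this alternative the one-sided power is about 0.64 versus roughly 0.52 for the two-sided test.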
When a one-tailed test passes but a two-tailed test does not
There is nothing weird about these results. The reason this result looks weird to you is that you're using the same significance level. Hypothesis 1 includes the possibility that the height either went down or up, while hypothesis 2 only includes the increase. So, intuitively (but not precisely), you need to compare the critical values at 0.05 significance for hypothesis 2 with those at 0.1 significance for hypothesis 1. Again, don't take these literally; this is just to point out that you can't compare the critical values of these hypotheses at the same significance level.

UPDATE: your journalist should not be reporting on statistical studies if she can't interpret them. There's no other way about this. Writing "Studies [on the same data] show that heights have not changed, but have gone up" simply disqualifies the person from her job.
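For a normal test statistic, the intuition about comparing significance levels can be made exact: the one-sided 5% critical value coincides with the two-sided 10% critical value.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf
z_one_5  = z(1 - 0.05)       # one-sided 5% critical value, about 1.645
z_two_5  = z(1 - 0.05 / 2)   # two-sided 5% critical value, about 1.960
z_two_10 = z(1 - 0.10 / 2)   # two-sided 10% critical value, about 1.645

print(round(z_one_5, 3), round(z_two_5, 3), round(z_two_10, 3))
```

Any statistic between 1.645 and 1.960 rejects one-sided at 5% but not two-sided at 5%, which is the apparent paradox in the question.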
When a one-tailed test passes but a two-tailed test does not
In many fields (e.g. medical statistics, where you may be comparing a new drug vs. an old one) the convention is that one-sided tests are done at 2.5% by default (vs. two-sided ones at 5%), and that two-sided tests are done even if you only hypothesize an effect in one direction. This convention has in part developed to prevent people from switching to one-sided tests just to increase type I errors in the desired direction.
Inconsistencies between conditional probability calculations by hand and with pgmpy (Bayesian Graphical Models)
Seems as though I figured this out. I'm posting it here just in case it helps someone. It turns out that I wasn't making an error with my "by-hand" calculation; it was indeed a problem with the pgmpy package. For some reason, it was inferring independence between $A$ and $B$ given $C$ (they should be independent only without knowing $C$). I updated the package to the developer edition and it gives the desired result.
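A stdlib brute-force enumeration over the three binary variables (using the CPD numbers from the pgmpy model posted elsewhere in this thread) reproduces the hand calculation and confirms that $B$ does depend on the evidence once $C$ is observed:

```python
# CPDs: P(A=1)=0.01, P(B=1)=0.1, and P(C=1 | A, B) per parent combination.
pA  = {0: 0.99, 1: 0.01}
pB  = {0: 0.90, 1: 0.10}
pC1 = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.6, (1, 1): 0.9}

def joint(a, b, c):
    p = pA[a] * pB[b]
    return p * pC1[(a, b)] if c == 1 else p * (1 - pC1[(a, b)])

# P(B=1 | A=1, C=1) by enumeration:
num = joint(1, 1, 1)
den = joint(1, 0, 1) + joint(1, 1, 1)
print(round(num / den, 6))   # 0.142857

# P(B=1 | C=1) differs from both P(B=1)=0.1 and the value above:
pB1_C1 = (joint(0, 1, 1) + joint(1, 1, 1)) / sum(
    joint(a, b, 1) for a in (0, 1) for b in (0, 1))
print(round(pB1_C1, 4))      # 0.3478
```

Since $P(B=1 \mid A=1, C=1) \ne P(B=1 \mid C=1)$, $A$ and $B$ are clearly not independent given $C$, which is what the buggy version was getting wrong.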
Inconsistencies between conditional probability calculations by hand and with pgmpy (Bayesian Graphical Models)
I tried installing the new package and the bug remains, so I looked at the Bayesian network code itself and found the bug. It is in line 530 of BayesianModel.py, when finding the descendents. Because neighbors is an iterator, it is empty after visit.extend, and so descendents.extend does nothing and the result is empty. I fixed it in place by casting neighbors to a list. Since Bayesian networks are acyclic, this is a very simple DFS which does not remember which nodes it has visited.

    descendents = []
    visit = [node]
    while visit:
        n = visit.pop()
        neighbors = list(self.neighbors(n))  # the fix: materialize the iterator
        visit.extend(neighbors)
        descendents.extend(neighbors)        # without the fix, this adds nothing
    return descendents
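The failure mode is easy to reproduce in isolation: extending from the same generator twice yields nothing the second time, which is exactly why materializing it as a list fixes the traversal.

```python
# A generator can only be consumed once.
neighbors = (n for n in ["B", "C"])

visit, descendents = [], []
visit.extend(neighbors)        # consumes the generator entirely
descendents.extend(neighbors)  # already exhausted -> adds nothing

print(visit)        # ['B', 'C']
print(descendents)  # []

# The in-place fix: materialize the iterator first.
fixed = list(n for n in ["B", "C"])
visit2, descendents2 = [], []
visit2.extend(fixed)
descendents2.extend(fixed)
print(descendents2)  # ['B', 'C']
```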
Inconsistencies between conditional probability calculations by hand and with pgmpy (Bayesian Graphical Models)
Confirmed this works well (matches the hand calculations) in version 0.1.14. Note I had to make changes to the code, to fix both errors in how pgmpy was being used (incorrect shapes) and for Python 3. Here is the updated code.

    from pgmpy.models import BayesianModel
    from pgmpy.factors.discrete import TabularCPD

    # Defining the network structure
    model = BayesianModel([('A', 'C'), ('B', 'C')])

    # Defining the CPDs:
    cpd_p = TabularCPD('A', 2, [[0.99], [0.01]])
    cpd_a = TabularCPD('B', 2, [[0.9], [0.1]])
    cpd_t = TabularCPD('C', 2, [[0.9, 0.5, 0.4, 0.1],
                                [0.1, 0.5, 0.6, 0.9]],
                       evidence=['A', 'B'], evidence_card=[2, 2])

    # Associating the CPDs with the network structure.
    model.add_cpds(cpd_p, cpd_a, cpd_t)

    # Some other methods
    model.get_cpds()

    from pgmpy.inference import VariableElimination

    infer = VariableElimination(model)

    print('P(B|A=1,C=1)')
    posterior_p = infer.query(['B'], evidence={'A': 1, 'C': 1})
    print(posterior_p)

    print('P(B|C=1)')
    posterior_p = infer.query(['B'], evidence={'C': 1})
    print(posterior_p)

    print('probs')
    posterior_p = infer.query(['B', 'C', 'A'])
    print(posterior_p)
Variable transformation on Kaggle titanic problem
What people are doing is trying different things until they find something that appears to work. There is no reason in particular why transforming a variable that can take many values into a variable that can only take two values should improve matters.

Try to think of it this way. Somewhere out there is a data generating process (DGP) that generated the data that you see before you. What you're trying to do is to find a model that allows you to reproduce the data generated by the DGP. If the DGP happens to be linear, then it happens to be linear. If it happens to be non-linear, then it happens to be non-linear. If the DGP is a function of a variable that takes many values, then it is a function of a variable that takes many values. If it happens to be a function of a variable that takes only two values, then it is a function of a variable that takes only two values. It is what it is.

Sometimes people will find something that appears to reproduce the DGP fairly well, only to later find out that it is wrong because it produces different outcomes than the DGP produces. And sometimes it happens to work well. There is no rule of thumb except to guess, check your guess, and then probably guess again.

In this case I imagine some people guessed that it mattered whether people were not alone, e.g. if you're alone then there is no one to give your place to, and no one to give you a place or to help you get a place. They first tried the two variables separately, then combined them into a group-size variable, then realized that group size perhaps doesn't matter much and guessed that the mere fact of being together mattered, etc. Later other people noticed or heard that this worked and copied it, which is probably why it's so common.

But to answer your question: there is no rule of thumb that I am aware of, and there is also no reason why this transformation should work. It's just something people try when they try to guess what the DGP might look like.
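For reference, the transformation being discussed usually amounts to just this (the column names follow the Kaggle Titanic CSV; the helper name is made up):

```python
def add_family_features(passenger):
    """Derive FamilySize and the binary IsAlone from SibSp and Parch."""
    family_size = passenger["SibSp"] + passenger["Parch"] + 1  # +1 for the passenger
    out = dict(passenger)
    out["FamilySize"] = family_size
    out["IsAlone"] = int(family_size == 1)
    return out

print(add_family_features({"SibSp": 1, "Parch": 2}))  # FamilySize 4, IsAlone 0
print(add_family_features({"SibSp": 0, "Parch": 0}))  # FamilySize 1, IsAlone 1
```

Whether collapsing FamilySize down to the binary IsAlone helps is exactly the guess-and-check question discussed above.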
Variable transformation on Kaggle titanic problem
Note that the IBM Watson solution to this problem was utterly wrong, and that the dataset is too small by a factor of 20 for split-sample validation to work properly. The original source dataset is at https://hbiostat.org/data . Please don't refer to a copy of the original dataset. sibsp and parch cannot be interpreted correctly without interacting them with age, although there may be some merit in computing a "passenger is alone" variable. A fully worked out case study using logistic regression for this dataset is in my book Regression Modeling Strategies and its course notes, which may be found at the RMS entry under https://hbiostat.org/rms . In my detailed analysis you'll see that I dealt with the age transformation very thoroughly and flexibly, and properly penalized the analysis for using the data to derive the functional form for age.
Why is $n < p$ a problem for OLS regression?
I can use gradient descent on the quadratic loss function and get a solution… Sure, but you have fewer constraints than unknowns, so your loss function is something like a parabola in three-space: $L =(w_1+w_2-1)^2$. There is a whole space of solutions; in this example, the solutions lie on the line $w_1+w_2=1$. If you're really indifferent to which solution you take, then there is no problem. If you do care, regularization (additional loss terms) can change the set of solutions. E.g., adding the L2 norm of the weights (with unit penalty weight) would identify the solution $w_1=w_2=\frac13$, which is not quite on the line of exact solutions, but it's close, and adding the L2 norm guarantees a unique solution.
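The $\frac13$ claim can be verified with a few lines of plain gradient descent on the toy loss above, taking the L2 penalty weight to be 1:

```python
# Minimize L = (w1 + w2 - 1)^2 + w1^2 + w2^2 by gradient descent.
w1 = w2 = 0.0
lr = 0.01
for _ in range(20_000):
    g = 2 * (w1 + w2 - 1)    # gradient of the data-fit term
    w1 -= lr * (g + 2 * w1)  # plus gradient of the L2 penalty
    w2 -= lr * (g + 2 * w2)

print(round(w1, 6), round(w2, 6))   # both converge to 1/3
```

Without the penalty terms, the same loop would stop anywhere on the line $w_1 + w_2 = 1$, depending on the starting point; the penalty pins down a unique minimizer.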
Why is $n < p$ a problem for OLS regression?
Here is a little specific example to illustrate the issue: Suppose you want to fit a regression of $y_i$ on $x_i$, $x_i^2$ and a constant, i.e. $$ y_i = a x_i + b x_i^2 + c + u_i $$ or, in matrix notation, \begin{align*} \mathbf{y} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \quad \mathbf{X} = \begin{pmatrix} 1 & x_1 & x_1^2 \\ \vdots & \vdots & \vdots \\ 1 & x_n & x_n^2 \end{pmatrix}, \quad \boldsymbol{\beta} = \begin{pmatrix} c \\ a \\ b \end{pmatrix}, \quad \mathbf{u} = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix} \end{align*} Suppose you observe $\mathbf{y}^T=(0,1)$ and $\mathbf{x}^T=(0,1)$, i.e., $n=2<p=3$. Then, the OLS estimator is \begin{align*} \widehat{\boldsymbol{\beta}} =& \, (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} \\ =& \, \left[ \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}^T \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix} \right]^{-1} \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}^T \begin{pmatrix} 0 \\ 1 \end{pmatrix} \\ =& \, \left[\underbrace{\begin{pmatrix} 2 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}}_{\text{not invertible, as $\mathrm{rk}()=2\neq 3$}} \right]^{-1} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \end{align*} There are infinitely many solutions to the problem: setting up a system of equations and inserting the observations gives \begin{align*} 0 =& \, a \cdot 0 + b \cdot 0^2 + c \ \ \Rightarrow c=0 \\ 1 =& \, a \cdot 1 + b \cdot 1^2 + c \ \ \Rightarrow 1 = a + b \end{align*} Hence, all $a=1-b$, $b \in \mathbb{R}$ satisfy the regression equation.
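A quick check that the whole family $a = 1-b$, $c=0$ fits both data points exactly, for any choice of $b$:

```python
def fit(x, a, b, c):
    """Evaluate the regression y = a*x + b*x^2 + c."""
    return a * x + b * x**2 + c

for b in (-3.0, 0.0, 0.5, 10.0):
    a, c = 1.0 - b, 0.0
    # Both observations (x=0, y=0) and (x=1, y=1) are matched exactly:
    assert fit(0.0, a, b, c) == 0.0
    assert fit(1.0, a, b, c) == 1.0
print("every (a, b) with a + b = 1 interpolates both points")
```

This is the numerical counterpart of the non-invertible $\mathbf{X}^T\mathbf{X}$ above: the data cannot distinguish among these infinitely many coefficient vectors.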
Why is $n < p$ a problem for OLS regression?
If $X'X$ is not invertible, the estimate $\hat \beta$ does not exist. Check this post, @mpiktas's answer, for details ("Existence" section): What is a complete list of the usual assumptions for linear regression?
46,095
Convolution of random variables: unimodality of the likelihood function
The short answer is no, it is not a unique global maximum, but there is a longer answer as well. Consider the sum of two Geometric variates with parameters $p_1 = 0.1,\space p_2 = 0.9$. If you have access to the individual variates, all is well, but if you only have access to the sum, it should be intuitively clear that you will not be able to distinguish the two cases $p_1 =0.1,\space p_2 = 0.9$ and $p_1 = 0.9,\space p_2 = 0.1$. More generally, you will not be able to distinguish $p_1 = a,\space p_2 = b$ from $p_1 = b,\space p_2 = a$, because the distribution of the sum is the same either way. Because of this, depending on your starting values, your algorithm could converge to either (for example) $\hat{p}_1 = 0.134, \space \hat{p}_2 = 0.802$ or $\hat{p}_1 = 0.802, \space \hat{p}_2 = 0.134$, and the associated values of the likelihood function will be the same. This is because the distribution of the sum is independent of the arrangement of the distributions of the random variates across the indices of the random variates being summed. This will always be the case if the components are independent. In another context - estimation of mixture models - this is related to the "label-switching" problem. If, on the other hand, you don't care about $p_1,\space p_2,\dots,p_k$ as they relate to the specific unobserved random variables, then things are different. You just want a collection of $k$ estimates, and don't care which is associated with index 1, index 2, etc. You can achieve this by generating your estimates, then ordering them from smallest to largest, for example (there are plenty of other ways of doing it), so $p_1^*$ is your estimate of the smallest $p_i$, etc. In this case, the ordered estimates do form a unique global maximum of the likelihood function - assuming at least $k$ data points - subject to the constraint. 
But note that you achieve this by, in effect, imposing a constraint on the estimation procedure that forces the ordering $p_1 \leq p_2 \leq \dots \leq p_k$ on the estimates themselves.
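The label-switching symmetry is easy to check numerically. A minimal Python sketch (with made-up data, not part of the original answer) for the sum of two independent geometric variates on $\{1,2,\dots\}$:

```python
import math

def sum_geom_pmf(y, p1, p2):
    # P(X1 + X2 = y) for independent X1 ~ Geom(p1), X2 ~ Geom(p2), support {1, 2, ...}
    return sum(p1 * (1 - p1) ** (k - 1) * p2 * (1 - p2) ** (y - k - 1)
               for k in range(1, y))

def loglik(data, p1, p2):
    # log-likelihood of (p1, p2) given observed sums
    return sum(math.log(sum_geom_pmf(y, p1, p2)) for y in data)

data = [3, 5, 2, 8, 4, 6]                      # hypothetical observed sums
# Swapping the labels (p1, p2) -> (p2, p1) leaves the likelihood unchanged:
assert math.isclose(loglik(data, 0.1, 0.9), loglik(data, 0.9, 0.1))
```

Because the convolution is symmetric in its parameters, any optimizer started from different points can land on either of the two mirror-image maxima, exactly as described above.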
46,096
Convolution of random variables: unimodality of the likelihood function
jbowman provided an answer regarding the label-switching problem. But putting that aside, it made me wonder about the proof for the unique global maximum given the constraint $p_1 < p_2 < \dots$. How can we be sure that the set of $p_i$ values at the maximum of $\mathcal{L}$ is unique? I could get an answer in the case of a single measurement $Y$, but still wonder about multiple measurements of $Y$. Consider a particular set $(X_1, X_2, X_3, \dots)$ to obtain the sum $Y=\sum X_i$. Then for each individual variable $X_i$ the maximum-likelihood estimate of the associated $p_i$ is $\hat{p_i} = \tfrac{1}{X_i}$, with value $\mathcal{L}(\hat{p_i} \vert X_i) = (1-\tfrac{1}{X_i})^{X_i-1}\tfrac{1}{X_i}$. And for the total set of variables $X_i$ the likelihood (if for each $X_i$ we choose the maximizing $p_i$) is $\mathcal{L}(\hat{p_1}, \hat{p_2}, \hat{p_3}, \dots \vert X_1,X_2,X_3,\dots) = \prod \mathcal{L}(\hat{p_i} \vert X_i) = (1-\tfrac{1}{X_1})^{X_1-1}\tfrac{1}{X_1} (1-\tfrac{1}{X_2})^{X_2-1}\tfrac{1}{X_2} (1-\tfrac{1}{X_3})^{X_3-1}\tfrac{1}{X_3} \cdots$ For a given set $X$ we can analyze two separate terms in the $\mathcal{L}$-function, and consider whether the likelihood increases if we choose two different terms with the same sum. This relates to evaluating the maximum of the related terms parameterized by $m$ and $x$ (with $m$ the midpoint of the two variables $X_i$ and $x$ the difference of the variables from the midpoint): $\frac{1}{m+x}\frac{1}{m-x}(1-\frac{1}{m+x})^{m+x-1}(1-\frac{1}{m-x})^{m-x-1}$ whose derivative with respect to $x$ is equal to $\frac{(1-\frac{1}{m-x})^{(m-x)}(1-\frac{1}{m+x})^{(m+x)}\left( \log(1-\frac{1}{m+x}) \log(1-\frac{1}{m-x}) \right) }{(m-x-1)(m+x-1)}$ from which we can conclude that the maximum is at $x=0$, and that the closer together the variables in the set $X$ are (i.e. the more uniform), the higher the likelihood. 
And thus there is a unique set of $p_i$ (the most homogeneous set $X$ of variables $X_i$, such that we can't select any pair from $X$ for which we could decrease the value of $x$ according to the previously discussed pattern) for the maximum likelihood, given a single $Y$ and a given size of the set of variables $X$.
46,097
Is there any statistical difference between 10 Bernoulli trials and 1 binomial trial with parameter n = 10?
Yes they're different in a simple, fairly obvious sense. Ten Bernoulli trials (presumed to be independent with common parameter $p$) is ten 0/1 values, so an observation on it looks like "$(0,0,0,1,0,1,1,1,0,0)$". The sum of such a vector is distributed as $\text{binomial}(10,p)$; an observation on that looks like "$4$". With an observation from the first thing (ten Bernoulli trials) I can answer a question like "was there a 1 on the third trial?" or "what's the difference in the number of 1's in the first 5 trials and the second 5 trials?". With the second thing (one observation on a binomial), I simply cannot consider those questions, since I have compressed the information from ten values to a single number. The first thing contains more information (in a particular sense) $-$ but if the assumptions are true it's not got any more information about $p$; the sum is sufficient for that. For example, if you want to answer a question "is my binomial model reasonable?" you might want to consider the possibility that $p$ might change over the course of the trials, or that the trials may be serially correlated. You can't begin to assess that from one binomial value (you've thrown out the information that would let you discern if the binomial model was suitable) but you can consider such a question from a set of $n$ Bernoulli trials. Therefore there's a clear "difference of statistical import" here. [Ten trials is small enough that you can't get much power in relation to those questions, but with a binomial you simply can't address it at all. If I had say a hundred trials, though, or a thousand, I could do some more interesting things to address those questions, and might hope to identify the extent to which my model was not really describing the situation.]
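A quick way to see the "sufficient for $p$" point is to compare the two log-likelihoods directly; a small Python sketch (not part of the original answer), using the example sequence from above:

```python
import math

def loglik_bernoulli(seq, p):
    # log-likelihood of p given the full 0/1 sequence
    return sum(math.log(p) if x else math.log(1 - p) for x in seq)

def loglik_binomial(k, n, p):
    # log-likelihood of p given only the sum k out of n trials
    return math.log(math.comb(n, k)) + k * math.log(p) + (n - k) * math.log(1 - p)

seq = [0, 0, 0, 1, 0, 1, 1, 1, 0, 0]    # the ten trials from the answer
k, n = sum(seq), len(seq)

# The two log-likelihoods differ only by log C(n, k), a constant in p,
# so they carry the same information about p (sufficiency of the sum):
d1 = loglik_binomial(k, n, 0.3) - loglik_bernoulli(seq, 0.3)
d2 = loglik_binomial(k, n, 0.7) - loglik_bernoulli(seq, 0.7)
assert math.isclose(d1, d2)
assert math.isclose(d1, math.log(math.comb(n, k)))
```

The difference is the same at every $p$, so any likelihood-based inference about $p$ is identical under the two data formats, even though the sequence retains strictly more information for model checking.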
46,098
regression with constraints
Logistic regression with box constraints served me well in the past for problems similar to yours. You are interested in prediction, not in inference, so, as long as a suitable estimate of generalization error (for example, 5-fold cross-validation error) is low enough, you should be ok. Let's consider a logit link function and a linear model: $$\beta_0+\beta_1 x_1 +\beta_2 x_2 =\log{\frac{\mu}{1-\mu}}$$ where $\mu=\mathbb{E}[y|x_1,x_2]$. Then $$\frac{\partial \mu}{\partial x_1}=\beta_1 \frac{\exp{(-\beta_0-\beta_1 x_1 -\beta_2 x_2)}}{(1+\exp{(-\beta_0-\beta_1 x_1 -\beta_2 x_2)})^2}>0 \iff \beta_1>0 $$ Thus constraints 1 and 2 are satisfied if you just use logistic regression with the constraint that $\beta_1>0$. In general, monotonicity constraints with respect to one or more variables are relatively easy to enforce with GLMs (Generalized Linear Models) such as logistic regression, because the monotonicity of the link function and the fact that it is expressed as a linear function of the predictors imply that $\mu$ is always monotonic with respect to the continuous predictors. An R package which supports logistic regression with box constraints (constraints of the type $a_i\leq\beta_i\leq b_i$) is glmnet. Its usage is a bit different from other regression functions in R, so have a look at ?glmnet. Constraint 3 wouldn't need specific attention in most cases, because most R regression functions will automatically encode categorical variables using dummy variables. Unfortunately, glmnet is one of the few functions which doesn't do that. You need to use model.matrix to solve this: if my_data holds your observations $X=\{x_{1i},x_{2i}\}_{i=1}^N$, then M <- model.matrix(~ x1 + x2, my_data) will build a design matrix suitable for use with glmnet. The only limitation of this approach lies in the fact that we have modeled the logit function as a linear function of the predictors. 
This may prove not flexible enough for your problem: in other words, you could get a large cross-validation error. If this is so, you should look into nonparametric logistic regression - here, however, you need to fit GAMs (Generalized Additive Models), not GLMs, and imposing monotonicity becomes more complicated. The package mgcv and the function mono.con are your friends here - you'll need to read quite a lot of documentation. Gavin Simpson's answer to question How to smooth data and force monotonicity which you linked in your question, has a good example. Finally, I reiterate that this approach (as well as all other approaches which rely on logistic regression, whether Bayesian or frequentist) only makes sense because you need a quick tool to approximate in an automated way multiple unknown functions inside your reinforcement learning workflow. $y|\mathbf{x}$ doesn't really have the binomial distribution, so you cannot expect to get realistic estimates of standard errors, confidence intervals, etc. If you need a real statistical model, which would give you not only point estimates but also realistic prediction intervals, then you need to take into account the real conditional distribution of your output. This question might help: Judging the quality of a statistical model for a percentage
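To show the idea behind the box constraint outside of glmnet, here is a rough NumPy sketch in Python (simulated data; everything here is an illustration, not the glmnet algorithm): projected gradient ascent on the logistic log-likelihood, clamping the coefficient of $x_1$ to be nonnegative.

```python
import numpy as np

# Simulated data (all values are made up for illustration):
rng = np.random.default_rng(0)
n = 500
x1 = rng.uniform(0, 1, n)
x2 = rng.integers(0, 3, n)                       # a 3-level categorical predictor
X = np.column_stack([np.ones(n), x1,             # intercept, x1, dummies for x2
                     (x2 == 1).astype(float),
                     (x2 == 2).astype(float)])
beta_true = np.array([-1.0, 2.0, 0.5, 1.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Projected gradient ascent on the Bernoulli log-likelihood:
beta = np.zeros(4)
for _ in range(5000):
    mu = 1 / (1 + np.exp(-X @ beta))             # fitted probabilities
    beta += 0.5 * X.T @ (y - mu) / n             # gradient of the log-likelihood
    beta[1] = max(beta[1], 0.0)                  # box constraint: coefficient of x1 >= 0

assert beta[1] >= 0.0                            # monotonicity in x1 is enforced
```

In glmnet itself the box constraints are set via the lower.limits and upper.limits arguments; the sketch only shows why clamping $\beta_1 \ge 0$ delivers the monotonicity constraint on $\mu$.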
46,099
regression with constraints
For constraint 3: dummy code $x_2$ so that you can model all categories within one regression function. EDIT: I can think of two ways to enforce similarity between coefficients of those dummy variables but they are rather cumbersome and I do not know available implementations. If you want to go down that road, you could of course always program it yourself: a) You could penalize the difference between the coefficients in the objective: $\min \sum_{i}(y_{i} - \hat{y}_{i})^2 + \lambda |\beta_{2}-\beta_{3}| + \lambda |\beta_{3}-\beta_{4}| + ... $ where $\hat{y}_{i}$ is your estimate of $y_{i}$ and $\beta_{2}$, $\beta_{3}$, ... are the coefficients of the dummies. This would require tuning of $\lambda$. b) You could put extra constraints such as $|\beta_{j}-\beta_{j+1}| \leq c$ where $c$ is your limit on the dissimilarity. You would have to determine $c$ for that. For constraint 2: use a logistic regression. For constraint 1: if there is truly a positive relationship between $x_1$ and $y$, then the regression should find a positive coefficient. I would not know the benefit of forcing negative regression coefficients to become positive if the data says the opposite.
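Option a) can be written down directly; a tiny Python sketch of the penalized objective (all names are illustrative only):

```python
def penalized_loss(y, y_hat, dummy_betas, lam):
    """Squared error plus lambda times the absolute differences of
    consecutive dummy coefficients (option a above)."""
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    fuse = lam * sum(abs(dummy_betas[j] - dummy_betas[j + 1])
                     for j in range(len(dummy_betas) - 1))
    return sse + fuse

# Similar dummy coefficients are penalized less than dissimilar ones:
assert penalized_loss([1, 0], [1, 0], [0.5, 0.6, 0.7], lam=2) < \
       penalized_loss([1, 0], [1, 0], [0.5, 1.5, 0.7], lam=2)
```

Minimizing this objective over the coefficients (e.g. with a generic optimizer) trades fit against smoothness of adjacent dummy coefficients, with $\lambda$ tuned by cross-validation.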
46,100
regression with constraints
The only obvious tool based on the constraints is some form of Bayesian logistic regression. The reason is that your constraints would define the prior and the likelihood. For example, by assuming $\partial{f}/\partial{x_1}$ is positive, you are assuming that there is a zero probability that $\hat{\beta}\le{0}$ in the linear analog problem. The bounding assumes the likelihood is some sigmoid function and the easiest way to express this would be through some function $g(h(x_1,x_2))$ where $g$ is the logistic likelihood. This just requires you to solve the prior for the relationship between $x_1$ and $x_2$ through $h$, and the relationships among the partitions as a prior. Your “smoothness” requirement can be helped through the rather ugly dropping of the “parallel lines” assumption used in Frequentist logistic regression. To see what I mean, let's drop $x_1$ and focus on the relationship between $y$ and $x_2$. Under the parallel lines assumption $y$ is mediated through a single parameter, say $\beta$, which is constant for all values. This implies that there is a direct relationship between how people behave when inside a frozen block of water, in cold water, in 90 or 100 degree water and in boiling water. This is obviously not the case since a person frozen in a block of water would have no activity and a person in boiling water might have brief frenetic activity followed by no activity. This is not at all like the behavior at 80-100 degrees. When you drop the parallel lines assumption you could map $\hat{\beta}_1\dots\hat{\beta}_{10}$ in such a way that it varies in a smoothly increasing or decreasing way. It would, however, create a nightmare of a prior distribution. You would have to restrict the posterior so that, for example, $\beta_{n+1}-\beta_{n}\le k_n;\ k_n>0$ for increasing functions to reach “smoothness.” This may be unnecessary, but you should plan for it. 
I am also assuming no interaction effect between $x_1$ and $x_2$ and that you somehow can construct a clean, proper prior. You would first have to construct this as a series of probability statements, but you may be able to simplify this if you do drop the parallel lines assumption by treating $x_2$ as an ordered partition. You would need a lot of data, because the partitioning would consume a lot of explanatory power. One other method that may preserve your smoothness requirements is to not treat $x_2$ as a $1-10$ variable, but as a ten bit variable, where each bit has its own constant that is added to or multiplied with $x_1$. For example, if $x_2=3$ then it is coded as $0000000100$ and where that is present then $c_3$ is put in the function as either $g(c_3+h(x_1))$ or $g(c_3h(x_1))$. It could even be both a multiplication and additive constant together inside $g$ but outside $h$ such as $g(c_3+k_3h(x_1))$. The sticking point will be the probability statements to show the relationship between the variables prior to collecting the data. You could use Bayesian model selection to test competing models. I had to use something like this in a problem because of the weird restrictions on the variables that I faced.
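The ten-bit coding mentioned above can be made concrete; a small Python sketch (purely illustrative) matching the $x_2=3 \mapsto 0000000100$ example:

```python
def encode(level, k=10):
    """One-hot code level (1..k) as a k-bit string, level 1 rightmost."""
    bits = ['0'] * k
    bits[k - level] = '1'
    return ''.join(bits)

assert encode(3) == '0000000100'   # matches the example in the answer
assert encode(1) == '0000000001'
```

Each bit then gets its own constant $c_j$ (and optionally a multiplier $k_j$) inside $g$, e.g. $g(c_3 + k_3 h(x_1))$ when the third bit is set.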