46,401
How Do I Know If A Markov Chain Follows The Markov Property?
An initial and simple test for this would be to see if the data show evidence of the weather being affected by the weather two days ago when you're already conditioning on the weather one day ago. To do this you would form a $3 \times 3 \times 3$ contingency table for three consecutive days and use this to conduct a t...
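The contingency-table check described above can be sketched in a few lines of standard-library Python. Everything below is illustrative: the transition matrix, sequence length, and seed are made up, and the Pearson chi-square statistic is computed by hand (one independence test per slice of yesterday's state; each 3x3 slice contributes 4 degrees of freedom, 12 in total) rather than with a stats package.

```python
import random
from collections import Counter

random.seed(0)
STATES = range(3)

# Simulate a genuinely first-order chain (hypothetical transition matrix).
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]
seq = [0]
for _ in range(20000):
    seq.append(random.choices(STATES, weights=P[seq[-1]])[0])

# 3x3x3 contingency table over (day t-2, day t-1, day t).
counts = Counter(zip(seq, seq[1:], seq[2:]))

def chisq_slice(mid):
    """Pearson chi-square for independence of t-2 and t, given day t-1 == mid."""
    table = [[counts[(i, mid, j)] for j in STATES] for i in STATES]
    n = sum(map(sum, table))
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    return sum((table[i][j] - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for i in STATES for j in STATES)

# Each slice contributes (3-1)*(3-1) = 4 df; the total has 12 df under the null.
stat = sum(chisq_slice(m) for m in STATES)
```

For a chain that really is first order, `stat` should look like a draw from a chi-square distribution with 12 degrees of freedom.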
46,402
How Do I Know If A Markov Chain Follows The Markov Property?
You can easily test this by doing multinomial regression. To fit the null hypothesis of a first order Markov chain you would include the previous state as a covariate in the model. You then estimate 6 parameters which translates into an estimate of the $3\times3$ transition matrix. To fit the alternative hypothesis th...
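The same null hypothesis can also be tested directly from transition counts with a likelihood-ratio statistic. This is not the multinomial-regression fit described above, just a count-based sketch of the same comparison (first-order null versus second-order alternative), with a made-up transition matrix and seed.

```python
import math
import random
from collections import Counter

random.seed(1)
S = range(3)
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.3, 0.3, 0.4]]
seq = [0]
for _ in range(30000):
    seq.append(random.choices(S, weights=P[seq[-1]])[0])

trip = Counter(zip(seq, seq[1:], seq[2:]))  # second-order transition counts
pair = Counter()                            # first-order counts, aggregated
for (i, j, k), n in trip.items():
    pair[(j, k)] += n

# Null (first order): p(k | j); alternative (second order): p(k | i, j).
ll_null = sum(n * math.log(pair[(j, k)] / sum(pair[(j, m)] for m in S))
              for (i, j, k), n in trip.items())
ll_alt = sum(n * math.log(n / sum(trip[(i, j, m)] for m in S))
             for (i, j, k), n in trip.items())

# 9 x 2 = 18 free parameters under the alternative vs 3 x 2 = 6 under the
# null, so the statistic is compared to a chi-square with 12 df.
lrt = 2 * (ll_alt - ll_null)
```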
46,403
Can we consider the loadings as a proxy for correlation, in a Principal Component Analysis (PCA)?
You can answer the questions yourself if you look at how the PCA is defined. For this, let $\mathbb{X}$ denote the $n\times p$ data matrix, and let $S = [s_{ij}]$ be the sample covariance matrix, e.g. $S = (n-1)^{-1} (\mathbb{X}^\top H \mathbb{X})$, where $H = I_n - \frac{1}{n}1_n1_n^\top$ is the centering matrix. For ...
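A quick numerical check of the loadings-as-correlations relationship: for PCA on the correlation matrix, the loading $\sqrt{\lambda_k}\,v_{jk}$ equals the correlation between variable $j$ and component $k$. The data below are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))  # correlated toy data
Xs = (X - X.mean(0)) / X.std(0, ddof=1)        # standardize -> PCA on correlations

R = np.corrcoef(Xs, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

scores = Xs @ eigvec                           # principal component scores
loadings = eigvec * np.sqrt(eigval)            # eigenvectors scaled by sqrt(eigenvalue)

# For correlation-matrix PCA, loading (j, k) equals corr(X_j, PC_k)
# (up to the usual sign indeterminacy of eigenvectors):
corr = np.array([[np.corrcoef(Xs[:, j], scores[:, k])[0, 1] for k in range(4)]
                 for j in range(4)])
assert np.allclose(np.abs(corr), np.abs(loadings))
```

For covariance-matrix PCA the same loadings are covariances, not correlations, unless each is further divided by the variable's standard deviation.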
46,404
Tweedie Dispersion Parameter Estimation Methods
Tweedie generalized linear models assume a mean-variance relationship with variance power $p$, defined by $$E(y_i)=\mu_i$$ and $${\rm var}(y_i)=\phi \mu_i^p$$ where $y_i$ is the $i$th observation, $\mu_i$ is the expected value, $\phi$ is the dispersion and $p$ is the mean-variance power parameter, also called the Tweed...
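A sketch of the mean-variance relationship and of a Pearson-type dispersion estimate. For $1 < p < 2$ a Tweedie variable can be simulated as a Poisson sum of gammas; all parameter values below are arbitrary, and $\mu$ and $p$ are treated as known so that the sketch isolates the estimation of $\phi$.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, phi, p, n = 2.0, 0.5, 1.5, 20000

# Compound Poisson-gamma parameterization of Tweedie(mu, phi, p), 1 < p < 2:
lam = mu ** (2 - p) / (phi * (2 - p))     # Poisson rate
alpha = (2 - p) / (p - 1)                 # gamma shape
theta = phi * (p - 1) * mu ** (p - 1)     # gamma scale
# These give E(y) = mu and var(y) = phi * mu**p, as in the model above.

N = rng.poisson(lam, size=n)
y = np.array([rng.gamma(alpha * k, theta) if k else 0.0 for k in N])

# Pearson estimator of the dispersion (mu and p known here, so no df correction):
phi_hat = np.mean((y - mu) ** 2 / mu ** p)
```

With $\mu$ and $p$ estimated from a fitted GLM, the same sum would be divided by the residual degrees of freedom instead of $n$.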
46,405
How to improve difference-in-differences graph?
I can tell that the treatment group starts out lower than the control group and, while the treatment group winds up lower than the control group, the treatment group has closed the gap. For difference-in-differences, this is exactly what I would want to see. One possible improvement is to put standard errors on the fou...
46,406
How to improve difference-in-differences graph?
This is what I finally did, perhaps this helps someone with a similar scenario:
46,407
Reframing a HMM problem as an RNN
The hidden nodes (states) in an HMM are random variables, while in an RNN only the input nodes could be considered random variables, all the other nodes are just deterministic nonlinear functions. Thus, it is difficult to formulate an HMM with an RNN. However, some attempts have been made to combine the ideas of dynami...
46,408
Reframing a HMM problem as an RNN
Neural networks can be used to amortize the optimization part, effectively learning an adaptive solution given a corpus of data. The connection with VAEs is pretty easy to see here. So, in your notation, instead of optimizing for $Q^*$, you would learn an approximate posterior. See the Structured Inference Networks and ...
46,409
Wilcoxon rank sum test correct vectors order
In R, the Wilcoxon rank sum test statistic $W$ is calculated as the sum of ranks in the first sample minus the minimum possible rank sum for that sample ($\frac{m^2+m}{2}$, where $m$ is the sample size of $x$). Since the distribution of rank sums under the null is symmetric about its mean, this gives equiv...
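That bookkeeping can be sketched in plain Python with made-up data (no ties, so integer ranks suffice). `W_x` below should match the `W` that `wilcox.test(x, y)` reports in R for these vectors, and swapping the argument order gives `W_y`:

```python
x = [1.2, 3.4, 2.2, 5.0]
y = [2.8, 0.7, 4.1]
m, n = len(x), len(y)

pooled = sorted(x + y)
rank = {v: i + 1 for i, v in enumerate(pooled)}   # no ties in this toy data

W_x = sum(rank[v] for v in x) - m * (m + 1) // 2  # statistic for order (x, y)
W_y = sum(rank[v] for v in y) - n * (n + 1) // 2  # statistic for order (y, x)

# The two orderings always sum to m * n, which is why the resulting
# two-sided p-values are identical whichever vector comes first.
assert W_x + W_y == m * n
```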
46,410
What is G-computation and G-estimation in causal inference
This is a short beginner-friendly guide to g-computation for estimating the average treatment effect https://github.com/kathoffman/causal-inference-visual-guides/blob/master/visual-guides/G-Computation.pdf . A more in-depth introduction can be found at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6074945/ . The g-formu...
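The g-computation recipe itself is short: fit an outcome model, predict every subject's outcome with treatment set to 1 and then to 0, and average the difference. A minimal sketch with simulated confounded data (true effect 2.0; the linear outcome model is assumed to be correctly specified):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
L = rng.normal(size=n)                                      # confounder
A = (rng.random(n) < 1 / (1 + np.exp(-L))).astype(float)    # treatment depends on L
Y = 2.0 * A + 1.5 * L + rng.normal(size=n)                  # true ATE = 2

# Step 1: fit an outcome model E[Y | A, L] (ordinary least squares here).
X = np.column_stack([np.ones(n), A, L])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]

# Step 2: predict for everyone with A set to 1, then to 0, and average.
X1 = np.column_stack([np.ones(n), np.ones(n), L])
X0 = np.column_stack([np.ones(n), np.zeros(n), L])
ate = np.mean(X1 @ beta - X0 @ beta)
```

A naive comparison of treated and untreated means would be biased here because `L` drives both treatment and outcome; the standardization step removes that.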
46,411
What is the distribution of a random linear combination of gamma random variables?
A direct attack (via integration) on computing the density looks intractable. Instead, we may more easily compute the characteristic function of $Y$ (when the scale factor $b=1,$ which we may assume without any loss of generality simply by changing the units in which we express $Y$) as $$\begin{aligned} \phi_Y(t;a) &= ...
46,412
Statistical interpretation of diagonal of Cholesky decomposition?
If the $X_i$ variables follow a normal distribution with covariance matrix $\Sigma$ and $$\Sigma = LDL'$$ then the diagonal elements of $D$ are the conditional variances of each $X_i$ conditional on $X_1,\ldots,X_{i-1}$. And, as you have already said, the elements of the $i$th row of $L$ give the regression coefficient...
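This is easy to verify numerically: take the ordinary Cholesky factor $C$ of $\Sigma$, rescale it to the unit-lower-triangular $L$ and diagonal $D$, and compare the diagonal of $D$ with the Schur-complement conditional variances (the $\Sigma$ below is random, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4 * np.eye(4)       # an arbitrary positive-definite covariance

# LDL' from the ordinary Cholesky factor C = L * sqrt(D):
C = np.linalg.cholesky(Sigma)
d = np.diag(C) ** 2                   # diagonal of D
L = C / np.diag(C)                    # unit lower-triangular factor

# d[i] should equal var(X_i | X_1, ..., X_{i-1}), i.e. the Schur complement.
for i in range(4):
    cond_var = (Sigma[i, i] - Sigma[i, :i] @ np.linalg.solve(Sigma[:i, :i], Sigma[:i, i])
                if i else Sigma[0, 0])
    assert np.isclose(cond_var, d[i])
```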
46,413
How to find the PMF of a weighted sum of IID Bernoulli random variables with constant sum of weights
You can write out the generating functions for this distribution quite easily, which is sufficient to characterise the distribution. For example, the characteristic function for $Y$ is: $$\begin{align} \phi_Y(t) &\equiv \mathbb{E}(e^{itY}) \\[12pt] &= \prod_{j=1}^k \mathbb{E}(e^{it a_j X_j}) \\[6pt] &= \prod_{j=1}^k ...
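Since each $X_j$ has a two-point support, the PMF of $Y=\sum_j a_j X_j$ can also be obtained exactly by convolving one Bernoulli at a time, which is often more practical than inverting the characteristic function. The weights and $p$ below are arbitrary:

```python
from itertools import product

a = [0.5, 1.5, 1.0]       # hypothetical weights (here they sum to 3)
p = 0.3                   # common Bernoulli success probability

# Build the PMF of Y = sum_j a_j X_j by convolving one Bernoulli at a time.
pmf = {0.0: 1.0}
for w in a:
    new = {}
    for y, q in pmf.items():
        new[y] = new.get(y, 0.0) + q * (1 - p)            # X_j = 0
        new[y + w] = new.get(y + w, 0.0) + q * p          # X_j = 1
    pmf = new

# Cross-check against brute-force enumeration of all 2^k outcomes.
brute = {}
for bits in product([0, 1], repeat=len(a)):
    y = sum(w * b for w, b in zip(a, bits))
    brute[y] = brute.get(y, 0.0) + p ** sum(bits) * (1 - p) ** (len(a) - sum(bits))

assert abs(sum(pmf.values()) - 1.0) < 1e-12
assert all(abs(pmf[y] - q) < 1e-12 for y, q in brute.items())
```

The dictionary keys also show that distinct weight subsets can collide on the same value of $Y$ (here $1.5 = 0.5 + 1.0$), which is why the support can be smaller than $2^k$.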
46,414
Bernoulli distribution with random means
Since all variables are IID, this is quite straightforward. First we compute the conditional moments: $$\begin{align} \mathbb{E}(S' | \mathbf{p}) &= \mathbb{E} \bigg( \frac{1}{nk} \sum_{i=1}^k \sum_{j=1}^n X_{ij} \bigg|\mathbf{p}\bigg) \\[6pt] &= \frac{1}{nk} \sum_{i=1}^k \sum_{j=1}^n \mathbb{E}(X_{ij}|p_i) \\[6pt] &...
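The conditional-moment calculation can be checked by simulation. The sketch below makes an arbitrary choice of mixing distribution ($p_i$ uniform on $(0,1)$, so $\mathbb{E}(S') = \mathbb{E}(\bar p) = 1/2$ by the tower rule); the sizes and seed are made up:

```python
import random
import statistics

random.seed(5)
k, n = 8, 50                 # k groups, n Bernoulli draws per group
reps = 4000

def one_S():
    # Hypothetical mixing distribution: each p_i drawn uniformly on (0, 1).
    ps = [random.random() for _ in range(k)]
    draws = sum(sum(random.random() < p for _ in range(n)) for p in ps)
    return draws / (n * k)   # S' = grand mean of the X_ij

samples = [one_S() for _ in range(reps)]

# By the tower rule, E(S') = E(p) = 1/2; the simulation mean should be close.
mean_S = statistics.fmean(samples)
```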
46,415
Predict non-negative continuous variable between 0 to 100
Scale your data to lie between 0 and 1, then use beta regression. A beta regression models the response as conditionally beta distributed, i.e., bounded between 0 and 1 (just like a negative binomial regression models your data as conditionally negbin distributed). The beta is the most common such distribution. (The un...
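The rescaling step is the only fiddly part: exact 0s and 100s must be pulled into the open interval, since the beta density lives on $(0,1)$. One common fix is the Smithson-Verkuilen compression; the helper name and data below are made up for illustration.

```python
def squeeze_to_open_unit(y, lo=0.0, hi=100.0):
    """Hypothetical helper: rescale y in [lo, hi] into the open interval (0, 1).

    Uses the Smithson & Verkuilen compression (z * (n - 1) + 0.5) / n, with
    n = len(y), so exact endpoints no longer map to 0 or 1.
    """
    n = len(y)
    scaled = [(v - lo) / (hi - lo) for v in y]
    return [(z * (n - 1) + 0.5) / n for z in scaled]

scores = [0.0, 12.5, 50.0, 87.5, 100.0]   # made-up 0-100 outcomes
z = squeeze_to_open_unit(scores)
assert all(0 < v < 1 for v in z)
```

The transformed values can then be handed to any beta-regression routine (e.g. `betareg` in R or `BetaModel` in statsmodels).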
46,416
How to model a probability distribution to phone call duration?
But is there a more rigorous way of modeling? Visually comparing a theoretical distribution to an observed histogram is an important practice toward being rigorous in our inferences. Like a real-world intuition behind one of those options? This would require domain-specific knowledge about phone calls that perhaps ...
46,417
How to model a probability distribution to phone call duration?
Treating time as continuous, candidate distributions that come to mind are log-normal, gamma, and Weibull. You can use a time-to-event regression package to fit various models and compare the AIC to see which provides the best fit. You can plot the Kaplan-Meier estimator of the survival function and overlay the fitted p...
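As a toy version of the AIC comparison: the log-normal and exponential MLEs both have closed forms, so two candidate fits can be compared without any survival package. The durations below are simulated (log-normal, so that family should win the comparison):

```python
import math
import random

random.seed(2)
# Hypothetical call durations in seconds, simulated log-normal for illustration.
dur = [math.exp(random.gauss(4.0, 0.8)) for _ in range(2000)]

# Log-normal MLE: fit mean/sd of the log-durations.
logs = [math.log(d) for d in dur]
mu = sum(logs) / len(logs)
sd = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))
ll_ln = sum(-math.log(d * sd * math.sqrt(2 * math.pi))
            - (math.log(d) - mu) ** 2 / (2 * sd ** 2) for d in dur)

# Exponential MLE: rate = 1 / sample mean.
m = sum(dur) / len(dur)
ll_exp = sum(-math.log(m) - d / m for d in dur)

# AIC = 2k - 2*loglik; log-normal has k = 2 parameters, exponential k = 1.
aic_ln, aic_exp = 2 * 2 - 2 * ll_ln, 2 * 1 - 2 * ll_exp
```

Lower AIC wins; with real data you would add gamma and Weibull fits (numeric MLE) to the same table.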
46,418
How to model a probability distribution to phone call duration?
It is usually a bad idea to fit a probability density function, not least because the shape and the results would be very sensitive to the choice of the bin size. Robust and simple options for comparing an empirical distribution with parametric ones include the QQ-plot (mostly to get the intuitive idea) and the Kolmog...
46,419
How to model a probability distribution to phone call duration?
From a practical point of view, whether a proposed calculation or test or assumption makes sense depends on the purpose. What are you going to do with the distribution once you have fitted it? If you want to use the distribution as input to simulation, then you could just use the empirical model. If you want to input t...
46,420
Converting a circular outcome variable to a linear one
I don't want to complicate my model (using a linear mixed effects model) by using circular statistics, so I was wondering if I can use the absolute deviation expressed as a percentage of 180? ...Is this a legitimate fix? There is not sufficient information in order to tell whether this is legitimate or not. The proble...
46,421
Converting a circular outcome variable to a linear one
You cannot validly linearize a circular measure which spans 360°, assuming the circularity of that measure is valid. Any transformation which "linearizes" a circular measure must necessarily privilege some value as being maximally linearly distant from some other value by virtue of lying on the other side of whatever p...
46,422
Converting a circular outcome variable to a linear one
Since you are interested in deviation from 0 (and not the direction), it would be appropriate to use $|\theta|$ as your variable. You've defined the problem in a way such that $-90$ and $+90$ (and similarly, $-2$ and $+2$) are the same outcome so one can take the absolute value and replace the circular problem with a l...
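Taking $|\theta|$ is just a wrap-then-abs operation. A small stdlib helper (hypothetical name) that maps any angle in degrees to a deviation from 0 in $[0, 180]$:

```python
def abs_deviation(theta_deg):
    """Absolute angular deviation from 0, in [0, 180] degrees.

    Wraps the angle to (-180, 180] first, so e.g. 350 deg is 10 deg away
    from 0, and -90 and +90 collapse to the same deviation.
    """
    wrapped = (theta_deg + 180.0) % 360.0 - 180.0
    return abs(wrapped)

assert abs_deviation(350.0) == 10.0
assert abs_deviation(-90.0) == abs_deviation(90.0) == 90.0
assert abs_deviation(180.0) == 180.0
```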
46,423
Is there a scenario where Bayes update results in no belief update when the prior has nonzero probability mass everywhere?
Let $X \sim U(a, a+1)$ for some unknown $a$ which is either 0 or 1. Suppose your prior on $a$ is uniform. Then suppose you observe $x = 1$. You can see via a symmetry argument that your posterior should be the same as your prior, since this gives you no information about $a$.
46,424
PCA leads to some highly Correlated Principal Components
You're right that the principal components should all be mutually orthogonal, so this is not expected. I think you probably have columns in your data matrix which are linearly dependent. If the column rank of your data matrix is < 64, it is not possible to find 64 mutually orthogonal vectors in its column space. It mig...
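The rank-deficiency diagnosis is easy to check: build a data matrix with one linearly dependent column and look at the covariance spectrum. With rank 3 out of 4 columns, one eigenvalue is numerically zero, and any "component" associated with that null space is arbitrary numerical noise (simulated data, for illustration):

```python
import numpy as np

rng = np.random.default_rng(11)
B = rng.standard_normal((200, 3))
X = np.column_stack([B, B[:, 0] + B[:, 1]])   # 4th column is linearly dependent

Xc = X - X.mean(0)
eigval = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))

# Column rank is 3, so one eigenvalue of the 4x4 covariance is (numerically) zero.
assert eigval.min() < 1e-10
```

Components whose eigenvalues are at machine precision carry no variance, so sample correlations computed between them are meaningless and can look large.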
46,425
Likelihood term in Cox Proportional Hazards Model
Rewrite your last expression in terms of both the baseline hazard $h_0(t)$ and the covariate-associated hazard ratios: $$\frac{h_0(t_j)\exp(\beta x_j)}{\sum_k h_0(t_j)\exp(\beta x_k)}= \frac{\exp(\beta x_j)}{\sum_k \exp(\beta x_k)}$$ where $k$ represents the people at risk in time $t_j$. That's the value of the proport...
46,426
Likelihood term in Cox Proportional Hazards Model
The $h()$ is not a probability, it is a hazard, although they are monotonically related. The Cox model is not a full likelihood procedure, it maximizes a partial likelihood. Even though we don't directly estimate the hazard function as a nuisance parameter (which would be a conditional likelihood approach), we pretend ...
46,427
Insignificant F-test in linear regression - when to stop?
Short Answer After spending several days thinking about this and running simulations, I can't agree with the usual recommendations. But if someone can see a flaw in my logic (which there easily might be) please do comment. My conclusion is this. If your goal is to look at a series of predictors of interest one-by-one...
46,428
Subscript notation in expectations (variational autoencoder)
It means expectation with respect to $q_{\phi}(\mathbf{z} | \mathbf{x}^{(i)})$. So: $$\mathbb{E}_{q_{\phi}(\mathbf{z} | \mathbf{x}^{(i)})}[\log p_{\theta}(\mathbf{x}^{(i)} | \mathbf{z})] = \int_{\mathbb{R}^d} q_{\phi}(\mathbf{z} | \mathbf{x}^{(i)}) \log p_{\theta}(\mathbf{x}^{(i)} | \mathbf{z}) d \mathbf{z} $$ Where wi...
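As a sanity check, that expectation can be approximated by Monte Carlo sampling from $q_{\phi}(\mathbf{z} \mid \mathbf{x}^{(i)})$. The sketch below uses a toy one-dimensional model (a Gaussian $q$ and a unit-variance Gaussian "decoder" $p(x\mid z)$, both assumed purely for illustration) so the expectation also has a closed form to compare against.

```python
import math, random

random.seed(0)

def log_p_x_given_z(x, z):
    """Log density of a toy decoder p(x|z) = N(x; z, 1)."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * (x - z) ** 2

# q_phi(z|x) = N(mu, sigma^2): an assumed variational posterior for this x
mu, sigma, x_obs = 0.3, 0.8, 1.1
S = 200_000
mc = sum(log_p_x_given_z(x_obs, random.gauss(mu, sigma)) for _ in range(S)) / S

# The same expectation in closed form for this Gaussian toy model
exact = -0.5 * math.log(2 * math.pi) - 0.5 * ((x_obs - mu) ** 2 + sigma ** 2)
```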
46,429
Independence of variables in expectation
No, it's not enough. Although $X$ and $Y$ are independent, the events $\{X<Y\}$ and $\{X>2\}$ are not. Let's say $Y$ is a constant random variable and is equal to $2$ with probability $1$. Then, $$E[I(X<2)I(X>2)]=0$$ But $E[I(X<2)]E[I(X>2)]=P(X<2)P(X>2)$, which is generally positive (it depends on the distribution of $X$).
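A quick numeric check of this counterexample, assuming $X \sim N(0,1)$ (any distribution of $X$ with mass on both sides of $2$ works the same way):

```python
import math

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

# Y = 2 with probability 1, X ~ N(0, 1)
joint = 0.0                              # E[I(X<Y) I(X>2)] = P(X<2 and X>2) = 0
product = Phi(2.0) * (1 - Phi(2.0))      # E[I(X<2)] E[I(X>2)] > 0
```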
46,430
What is the expected distance to the nearest molecule?
Consider $d$ dimensions. The distribution of the distance to the nearest neighbor of any point can be approximated by supposing $N$ neighbors are independently, uniformly, and randomly situated within a radius of one unit from that point (where the distance unit and $N$ are chosen to reproduce the molecular density; preferably $N$ is...
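For the $d=1$ case this is easy to check by simulation: with $N$ neighbors at distances uniform on $[0,1]$, $P(R>r)=(1-r)^N$ and hence $E[R]=1/(N+1)$. A sketch (the value of $N$ is arbitrary):

```python
import random

random.seed(1)
N, trials = 10, 50_000     # N neighbors; both values are arbitrary
# d = 1: neighbor distances are Uniform(0, 1); the nearest is their minimum
mean_nearest = sum(
    min(random.random() for _ in range(N)) for _ in range(trials)
) / trials
expected = 1 / (N + 1)     # since P(R > r) = (1 - r)^N  =>  E[R] = 1/(N+1)
```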
46,431
Why this OLS fitting will converge to (0,-0.5)?
This is a law of large numbers in action. Let $\rho$ be the parameter (the lag-1 correlation) and let $\varepsilon_i$ be a sequence of iid standard Normal variables, so that for $i=1, 2, \ldots,$ the model is $$y_{i+1} = \rho y_i + \varepsilon_{i+1}$$ and $y_0=0.$ Therefore the first differences are $$x_{i+1} = y_{i+1...
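The limit is simple to see by simulation in the $\rho=0$ case: if $y_i$ is white noise, the first differences $x_i$ form an MA(1) process with $\operatorname{Cov}(x_{i+1},x_i)=-\sigma^2$ and $\operatorname{Var}(x_i)=2\sigma^2$, so the OLS slope of $x_{i+1}$ on $x_i$ tends to $-1/2$ and the intercept to $0$. A minimal sketch:

```python
import random

random.seed(2)
n = 200_000
y = [random.gauss(0, 1) for _ in range(n)]        # the rho = 0 case: white noise
x = [y[i + 1] - y[i] for i in range(n - 1)]       # first differences

# OLS of x_{i+1} on x_i
u, v = x[:-1], x[1:]
mu_u, mu_v = sum(u) / len(u), sum(v) / len(v)
cov = sum((a - mu_u) * (b - mu_v) for a, b in zip(u, v)) / len(u)
var = sum((a - mu_u) ** 2 for a in u) / len(u)
slope = cov / var                  # Cov = -sigma^2, Var = 2 sigma^2  ->  -1/2
intercept = mu_v - slope * mu_u    # -> 0
```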
46,432
Trick to remember when to reject null (p-values vs alpha)
This surely will not top the list of possible "cool undergraduate-level tips", but simply recalling the definition of a p-value might be helpful (quoted from Wikipedia): The probability of obtaining test results at least as extreme as the results actually observed, under the assumption that the null hypothesis is corr...
46,433
Trick to remember when to reject null (p-values vs alpha)
The standard mnemonic for remembering how to make a conclusion in a hypothesis test is: If p is low, the null must go! As to why this is the case, the best explanation of a classical hypothesis test is that it is the inductive analogue of a proof by contradiction. In a proof by contradiction we begin with a null hyp...
46,434
Trick to remember when to reject null (p-values vs alpha)
Fisher is said to have given the interpretation of $p$-values as a "measure of surprise", given you believe in the null hypothesis. This may actually be confusing, since a low $p$-value then indicates strong surprise. Instead, we can introduce $p$-values as a "measure of compatibility with the null". (suggested by Christian ...
46,435
Trick to remember when to reject null (p-values vs alpha)
I've found that some of my students are helped by thinking of the p-value as a percentile. They are familiar with the concepts of being in the top 10% of a class by GPAs, or "among the 1%" in terms of wealth. So for your example, a p-value of 0.04 means "Our observed value of the test statistic $T$ was among the top 4%...
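The percentile view translates directly into a simulation: the $p$-value is the fraction of null-distribution draws at least as extreme as the observed statistic. A sketch assuming a standard normal null and a right-tailed test:

```python
import random

random.seed(3)
# Draws from the null distribution of a test statistic T (standard normal here)
null_draws = [random.gauss(0, 1) for _ in range(100_000)]

def p_value(t_obs):
    """Right-tail p-value: fraction of null draws at least as extreme as t_obs."""
    return sum(1 for t in null_draws if t >= t_obs) / len(null_draws)

# An observed T in the "top 5%" of the null distribution has p close to 0.05
p = p_value(1.645)
```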
46,436
SGD for Gaussian Process estimation
This conference paper from NeurIPS 2020 may contain wh...
46,437
$\text{Var}(y)$ in linear regression
Since $y_i \sim \mathcal N(\beta_0+\beta_1x_i,\sigma^2 )$, the variance of each sample $y_i$ is $\sigma^2$. This is a conditional variance, $\operatorname{Var}(y|x)$. The sample variance of all samples $y$ is a marginal variance. It's given by the common formulas $$\operatorname{Var}(y) =\mathbb E\left[y^2\right]- E\le...
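The decomposition $\operatorname{Var}(y)=\beta_1^2\operatorname{Var}(x)+\sigma^2$ (for $x$ independent of the error) can be checked by simulation; all parameter values below are arbitrary illustration choices:

```python
import random

random.seed(4)
b0, b1, sigma = 2.0, 1.5, 0.5           # arbitrary illustration values
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]                  # Var(x) = 1
ys = [b0 + b1 * xi + random.gauss(0, sigma) for xi in xs]

mean_y = sum(ys) / n
var_y = sum((yi - mean_y) ** 2 for yi in ys) / n   # marginal variance of y
theory = b1 ** 2 * 1.0 + sigma ** 2                # b1^2 Var(x) + sigma^2 = 2.5
```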
46,438
Regression with flexible functional form
This sounds like a great job for GAMs via the mgcv package. Use a penalized smoothing spline to estimate $g$ and add an additive effect of $X$. The model would look like gam(y ~ x + s(z)). library(mgcv) #> Loading required package: nlme #> This is mgcv 1.8-31. For overview type 'help("mgcv-package")'. z = rnorm(1000...
46,439
Regression with flexible functional form
This model is a partially linear regression model, and in your case, $g(Z)$ is a nuisance parameter. See page 62 of this link for a primer on the subject. Of special note in application is Robinson's Transformation (Section 7.7 on page 62 of the linked file). Inference is particularly tricky in these settings, since ...
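Robinson's transformation can be sketched in a few lines: smooth $Y$ and $X$ on $Z$ nonparametrically, then regress the residuals on each other. The Python below uses a Nadaraya-Watson smoother and a made-up DGP with $g(z)=z^2$; it is a toy illustration under those assumptions, not a production estimator (no bandwidth selection, no inference):

```python
import math, random

random.seed(5)
n, beta = 600, 1.5
z = [random.uniform(-2, 2) for _ in range(n)]
x = [math.sin(zi) + random.gauss(0, 1) for zi in z]          # X depends on Z
y = [beta * xi + zi ** 2 + random.gauss(0, 0.5)              # g(z) = z^2
     for xi, zi in zip(x, z)]

def nw_smooth(z_pts, vals, h=0.25):
    """Nadaraya-Watson kernel estimate of E[val | z] at each sample point."""
    out = []
    for zi in z_pts:
        w = [math.exp(-0.5 * ((zi - zj) / h) ** 2) for zj in z_pts]
        out.append(sum(wi * vi for wi, vi in zip(w, vals)) / sum(w))
    return out

# Robinson's transformation: residualize y and x on z, then OLS through origin
ry = [yi - mi for yi, mi in zip(y, nw_smooth(z, y))]
rx = [xi - mi for xi, mi in zip(x, nw_smooth(z, x))]
beta_hat = sum(a * b for a, b in zip(rx, ry)) / sum(a * a for a in rx)
```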
46,440
Is smoothing an appropriate solution to deal with model diagnostics in a GAMLSS?
The overall and predictor-specific worm plots share the feature that "different shapes indicate different inadequacies in the model", as explained in the article Analysis of longitudinal multilevel experiments using GAMLSSs by Gustavo Thomas et al: https://arxiv.org/pdf/1810.03085.pdf. Section 12.4 of the book Flexible...
46,441
Is smoothing an appropriate solution to deal with model diagnostics in a GAMLSS?
A worm plot is basically a qq plot, so what you are doing is trying to find the best functional form of the covariates that yields a normal quantile residual. This indicates a better fit. You checked the information criterion, and you could also do a likelihood ratio test. But if the model has a better fit, there isn't...
46,442
Impossible to overfit when the data generating process is deterministic?
If the DGP is noiseless, it is not possible to encounter the overfitting problem. That’s true. In fact, you can also see overfitting as the problem of fitting the noise (irreducible error) and not only the signal. For example, in a regression context you can improve the fit; at most, in $R^2$ terms, a perfect fit can be achieved,...
46,443
Impossible to overfit when the data generating process is deterministic?
I agree that overfitting is not possible when the data-generating process is deterministic. However, this is not "too good to be true" because generalization is still a problem. Consider that we can take our model $\hat{f}$ to be a Lagrange polynomial (or any other "look-up-table"-like interpolator) of whatever order i...
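A concrete illustration of the interpolation point: Lagrange interpolation of Runge's function $1/(1+25x^2)$ (a noiseless DGP) reproduces every training point exactly, yet is badly wrong between the outer nodes: zero training error, poor generalization.

```python
def lagrange_eval(xs, ys, t):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at t."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (t - xj) / (xi - xj)
        total += yi * w
    return total

f = lambda u: 1.0 / (1.0 + 25.0 * u * u)       # Runge's function: a noiseless DGP
nodes = [-1 + 2 * i / 10 for i in range(11)]   # 11 equally spaced training points
vals = [f(u) for u in nodes]

# Zero error on the training set, large error between the outer nodes
train_err = max(abs(lagrange_eval(nodes, vals, u) - f(u)) for u in nodes)
test_err = abs(lagrange_eval(nodes, vals, 0.95) - f(0.95))
```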
46,444
Impossible to overfit when the data generating process is deterministic?
We can treat the Machine Learning book by Mitchell (1997) as an authoritative reference on this subject. On p. 67 he defines overfitting Definition: Given a hypothesis space $H$, a hypothesis $h \in H$ is said to overfit the training data if there exists some alternative hypothesis $h' \in H$, such that $h$ has smaller er...
46,445
Is it a must to include a random slope in a mixed model?
There is considerable disagreement on this topic. I like to keep it simple. If you have a priori reasons to believe that the fixed effect in question should vary by subject (or whatever the grouping variable is) then you should fit random slopes. Obviously, this is provided that the data supports such a model. Often a ...
46,446
How to measure whether a discrete distribution is uniform or not?
Your suggestion should work. I'm going to make another suggestion, which also yields an integer value for the discrepancy from uniformity. As indicated in comments, we don't really have enough information to say whether it's better for your application. The usual chi-squared goodness of fit statistic is $\sum_i (O_i-E_...
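For reference, the chi-squared statistic with equal expected counts is a one-liner; the counts below are made-up illustration values:

```python
def chisq_uniform(counts):
    """Chi-squared goodness-of-fit statistic against the discrete uniform."""
    n, k = sum(counts), len(counts)
    e = n / k                                  # equal expected count per cell
    return sum((o - e) ** 2 / e for o in counts)

flat = chisq_uniform([25, 25, 25, 25])   # perfectly uniform counts -> 0
tilt = chisq_uniform([40, 30, 20, 10])   # concentrated counts -> larger value
```

Compared against a chi-squared distribution with $k-1$ degrees of freedom, this gives the usual goodness-of-fit test; used on its own, it is a plain discrepancy measure.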
46,447
How to measure whether a discrete distribution is uniform or not?
You can just as well use entropy in the discrete case as in the continuous case. The discrete uniform distribution on, say, $\{ 1,2,\dotsc,n \}$ also maximizes entropy among all distributions on that same support. Note that it does not matter if that support set is integers or just indices into some discrete set $\{ x...
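A minimal check that the discrete uniform maximizes entropy on a fixed support of size $n$ (the skewed distribution is an arbitrary comparison):

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

n = 5
uniform = [1 / n] * n
skewed = [0.6, 0.2, 0.1, 0.05, 0.05]     # arbitrary comparison distribution

h_uniform = entropy(uniform)             # equals log(n), the maximum
h_skewed = entropy(skewed)
```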
46,448
Conditional intraclass correlation (ICC) from a linear mixed model as evidence for test-retest reliability?
Yes, you can do this and interpret it as you think. I have read about such an interpretation in the second chapter of Sophia Rabe-Hesketh and Anders Skrondal's Multilevel and Longitudinal Modeling using Stata book (Volume 1). A more detailed explanation follows. Edit: I also added a simulation to demonstrate what is go...
46,449
Conditional intraclass correlation (ICC) from a linear mixed model as evidence for test-retest reliability?
This post really helped me and I wanted to thank you. In case other users ran into the same issue I had - I am adding a slight change to the simulation above. The only thing here is that this shows that the Pearson correlation for two-time measurements is exactly the same as $\rho$. Nothing special - only nice to see the numbers m...
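The same point in a stripped-down sketch without any mixed-model machinery: simulate $y_{it}=u_i+e_{it}$, and the Pearson correlation between the two occasions estimates $\rho=\sigma_u^2/(\sigma_u^2+\sigma_e^2)$. Parameter values are arbitrary:

```python
import random

random.seed(6)
sigma_u, sigma_e = 1.0, 0.5                          # between- and within-subject SDs
rho = sigma_u ** 2 / (sigma_u ** 2 + sigma_e ** 2)   # ICC = 0.8

n = 50_000
y1, y2 = [], []
for _ in range(n):
    u = random.gauss(0, sigma_u)                 # subject random intercept
    y1.append(u + random.gauss(0, sigma_e))      # occasion 1
    y2.append(u + random.gauss(0, sigma_e))      # occasion 2

m1, m2 = sum(y1) / n, sum(y2) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(y1, y2)) / n
sd1 = (sum((a - m1) ** 2 for a in y1) / n) ** 0.5
sd2 = (sum((b - m2) ** 2 for b in y2) / n) ** 0.5
pearson = cov / (sd1 * sd2)                      # estimates rho
```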
46,450
Does a significant interaction necessarily imply that at least one of the simple effects will be significant?
Considering that the interaction between two variables is significant, is it always the case that at least one of the two simple effects will be significant? No, not at all. If so, is there any proof or a counter example? Yes, it is easy to create a counter example: > set.seed(1) > N <- 200 > A <- rep(c(0,0,1,1), t...
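The counterexample can also be seen analytically, without simulation noise. In a balanced 2×2 design with known error SD, the interaction contrast gets a larger $z$ statistic than either simple effect when the two simple effects point in opposite directions; the cell means below are chosen (arbitrarily) so that the interaction clears 1.96 while neither simple effect does:

```python
import math

# Balanced 2x2 design, n per cell, known error SD sigma.
# Simple effects of A: +d at B=0 and -d at B=1; interaction contrast: -2d.
n, sigma, d = 50, 1.0, 0.3
m = {(0, 0): 0.0, (1, 0): d, (0, 1): 0.0, (1, 1): -d}

# z for a simple effect: difference of two cell means, each with SE sigma/sqrt(n)
z_simple = abs(m[1, 0] - m[0, 0]) / (sigma * math.sqrt(2 / n))
# z for the interaction: difference of differences, four cell means involved
z_inter = abs((m[1, 1] - m[0, 1]) - (m[1, 0] - m[0, 0])) / (sigma * math.sqrt(4 / n))
```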
46,451
Understanding why a $p$-value is too small
A few general thoughts: It's very rare that real-world data follow a specific distribution exactly. This doesn't stop us from using a specific distribution as a model in order to answer questions. A model doesn't have to be perfect, but good enough for the purpose. With such a huge sample size, even tiny deviations fr...
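The "huge sample size makes tiny deviations significant" point is mechanical: for fixed observed proportions, the chi-squared statistic scales linearly with $n$. A sketch with an arbitrary distribution that is 1% off uniform:

```python
# Fixed observed proportions, 1% off uniform; the chi-squared statistic grows
# linearly with n, so with a huge sample even this tiny deviation is "significant".
p_obs = [0.26, 0.25, 0.25, 0.24]
p_exp = [0.25] * 4

def chisq_stat(n):
    return sum(n * (po - pe) ** 2 / pe for po, pe in zip(p_obs, p_exp))

small, big = chisq_stat(400), chisq_stat(4_000_000)   # same proportions, 10,000x n
```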
46,452
Amount of Possible Bootstrap Samples
A standard technique is the "stars and bars" construction. By "distinct bootstrap resample" what you mean is a sequence of $N$ elements of a set of size $N$ without paying attention to their order. Enumerate this set as $\{x_1, x_2, \ldots, x_N\}.$ Corresponding to any such sequence is the unique ordered sequence in ...
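The stars-and-bars count works out to $\binom{2N-1}{N}$, which can be cross-checked by brute-force enumeration of multisets for small $N$:

```python
from math import comb
from itertools import combinations_with_replacement

def n_bootstrap_resamples(N):
    """Distinct bootstrap resamples of size N from N items: C(2N-1, N)."""
    return comb(2 * N - 1, N)

# Brute-force cross-check for a small N: count multisets of size N directly
N = 5
enumerated = sum(1 for _ in combinations_with_replacement(range(N), N))
```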
46,453
Can we validate accuracy using precision and recall?
Assuming we know the sample size $N$ we can get the Accuracy from knowing Precision and Recall. Precision is defined as $\frac{TP}{TP+FP}$ and Recall is defined as $\frac{TP}{TP+FN}$, $TP$ is the number of True Positives, $FP$ is the number of False Positives and $FN$ is the number of False Negatives. Now given that $N ...
46,454
Can we validate accuracy using precision and recall?
No, because you know nothing about true negatives, $TN$. Think about the confusion matrix with $FN,FP,TP$ entries known, which are used to calculate precision and recall, which means you have more information than precision/recall. But, even with these three known, you can adjust $TN$ as much as you can to change the a...
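A minimal numerical illustration of this point: the two confusion matrices below share $TP$, $FP$ and $FN$ (hence precision and recall), but differ in $TN$, so accuracy moves freely.

```python
def precision_recall_accuracy(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Identical TP/FP/FN, two very different TN values
print(precision_recall_accuracy(80, 20, 20, 10))   # (0.8, 0.8, ~0.692)
print(precision_recall_accuracy(80, 20, 20, 880))  # (0.8, 0.8, 0.96)
```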
46,455
Can we validate accuracy using precision and recall?
No, it's not possible to calculate the accuracy solely based on Precision and Recall. Building up on the previous answers, even if you know the sample size $N$, you'd still need more information. Given that: $N = TP+TN+FP+FN \implies TN = N-(TP+FP+FN)$ Precision is defined as $P = \frac{TP}{TP+FP}$ Recall is defined a...
46,456
How is the Herfindahl-Hirschman index different from entropy?
In biology, these are called measures of diversity, and while that application is different, there must be some value in the comparison. See for example this wiki or this book by Anne Magurran. In that application $p_i$ is population share (probability that an individual sampled from the population is of species $i$.) ...
46,457
How is the Herfindahl-Hirschman index different from entropy?
I believe many sources refer to them as similar simply because both functionals are often used towards the same goal - quantifying the diversity/information of a given probability distribution. The HHI index in fact has many other names in different scientific disciplines, most notably the Simpson index. An extensive a...
46,458
How is the Herfindahl-Hirschman index different from entropy?
A few comments. Let $P = (p_1, p_2, \ldots, p_N)$ be a probability distribution (so that $0 \le p_i \le 1$ and $\sum_i p_i = 1$). The measures are conceptually very closely related. The entropy is the expected surprise of a random draw from the distribution $P$ (where the surprise of an event with probability $p$ is ...
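A quick sketch contrasting the two measures on a uniform versus a concentrated distribution, taking entropy as $-\sum_i p_i\log p_i$ and the HHI as $\sum_i p_i^2$:

```python
import numpy as np

def entropy(p):
    """Shannon entropy: expected surprise -log(p) of a draw from p."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p)))

def hhi(p):
    """Herfindahl-Hirschman index: chance two independent draws coincide."""
    p = np.asarray(p, dtype=float)
    return float(np.sum(p ** 2))

uniform = [0.25] * 4
skewed = [0.85, 0.05, 0.05, 0.05]
print(entropy(uniform), hhi(uniform))  # log(4) ≈ 1.386, 0.25
print(entropy(skewed), hhi(skewed))    # lower entropy, higher HHI
```

Both move monotonically with concentration here, but they weight rare categories differently, which is where the two measures diverge.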
46,459
How is the Herfindahl-Hirschman index different from entropy?
The first thing to notice is that each of these measures is in opposite directions, and they are also on different scales. In order to compare them in the same direction and scale, I am going to compare scaled versions of the negated HHI and entropy. Specifically, I will begin by comparing the following functions: $$...
46,460
How many tests should we do to estimate the percentage of people who contracted COVID-19 in Lombardy?
This is actually a handbook example of determining the sample size needed for estimating a binomial proportion (e.g. Jones et al., 2004, Naing, 2003 for other references and examples). First of all, to make it more precise, we are talking about finding a sample size such that, with probability $\alpha$, the difference betwe...
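A sketch of the standard normal-approximation formula $n = z^2\,p(1-p)/d^2$, assuming a 95% confidence level and the conservative worst case $p=0.5$ (the margin $d$ is the desired half-width of the interval):

```python
import math
from statistics import NormalDist

def sample_size(d, conf=0.95, p=0.5):
    """Smallest n so the normal-approximation CI half-width is at most d."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

print(sample_size(0.03))  # 1068 tests for a ±3% margin at 95% confidence
print(sample_size(0.01))  # 9604 tests for a ±1% margin
```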
46,461
What is the meaning of $\sqrt{\mathrm{var}(X)\mathrm{var}(P)-[\mathrm{cov}(X,P)]^2}$?
It is the square root of the determinant of the covariance matrix (between $X$ and $P$). The determinant of the covariance matrix is called the Generalized Variance, which summarizes the co-variability of multivariate random variables in a single scalar. What you write is the square root of it, so I believe it won't be too odd...
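A small numerical check that the determinant of the $2\times 2$ covariance matrix equals $\mathrm{var}(X)\mathrm{var}(P)-\mathrm{cov}(X,P)^2$ (the population parameters here are chosen for illustration, giving $\sqrt{2\cdot 1 - 1.2^2}\approx 0.748$):

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated bivariate sample (X, P): variances 2 and 1, covariance 1.2
xp = rng.multivariate_normal([0, 0], [[2.0, 1.2], [1.2, 1.0]], size=10_000)
cov = np.cov(xp.T)

# det of the covariance matrix = var(X)var(P) - cov(X,P)^2
assert np.isclose(np.linalg.det(cov), cov[0, 0] * cov[1, 1] - cov[0, 1] ** 2)
gen_sd = np.sqrt(np.linalg.det(cov))
print(gen_sd)  # close to the population value sqrt(2*1 - 1.2^2) ≈ 0.748
```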
46,462
Least Squares removing first $k$ observations Woodbury formula?
You've basically laid out the key facts, I think you just need a hint on how to fit them all together. Here's a quick-and-dirty overview. I think it's easier to see how to accomplish your goal if you build up from the Sherman-Morrison formula, which is just a special case of the Woodbury matrix identity. The Sherman-Mo...
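A sketch of the rank-1 case: removing one row $x_0$ from $X$ downdates $(X^TX)^{-1}$ via Sherman-Morrison, $(A - x_0x_0^T)^{-1} = A^{-1} + A^{-1}x_0x_0^TA^{-1}/(1 - x_0^TA^{-1}x_0)$, without re-inverting:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
A_inv = np.linalg.inv(X.T @ X)

# Sherman-Morrison downdate for removing row x0 from X
x0 = X[0]
downdated = A_inv + np.outer(A_inv @ x0, x0 @ A_inv) / (1.0 - x0 @ A_inv @ x0)

# Compare with inverting the reduced Gram matrix directly
direct = np.linalg.inv(X[1:].T @ X[1:])
print(np.allclose(downdated, direct))  # True
```

Removing $k$ rows at once replaces the scalar denominator with a $k\times k$ matrix inverse, which is exactly the Woodbury identity.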
46,463
Least Squares removing first $k$ observations Woodbury formula?
Over here and here, the leave-one-out (LOOCV) formula uses Sherman-Morrison formula in its derivation. Deriving the leave-$k$-out would require the general formula by Woodbury, as you have suspected. Here I use subscript $k$ as the indices for the rows to be left out from the training set, $(k)$ as the whole vector or ...
46,464
Probability of drawing the unfair die
The question seeks to find the probability that the die drawn is unfair given that it was thrown $5$ times and all throws were $3$s. Hence, we seek to calculate $\mathrm{P}(\mathrm{Unfair}\,|\,\text{5 threes})$. According to Bayes' theorem, we have: $$ \mathrm{P}(\mathrm{Unfair}\,|\,\text{5 threes}) = \frac{\mathrm{P}(...
46,465
Probability of drawing the unfair die
According to Bayes’ theorem: P(A | B) = ( P(A) * P(B | A) ) / ( P(A) * P(B | A) + P(not A) * P(B | not A) ) P(A) = P(unfair) = 1 / 10 P(not A) = P(fair) = 9 / 10 P(B | A) = P(5 threes | unfair) = 1 P(B | not A) = P(5 threes | fair) = 1 / (6^5) P(A | B) = ( 1 / 10 * 1 ) / ( 1 / 10 * 1 + 9 / 10 * (1 / (6^5) ...
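The arithmetic above can be run directly (assuming, as in the setup, that the unfair die always shows a 3):

```python
# Posterior probability the die is unfair, given five 3s in a row
p_unfair, p_fair = 1 / 10, 9 / 10
like_unfair = 1.0            # the unfair die always shows a 3
like_fair = (1 / 6) ** 5     # a fair die shows five 3s with prob 1/7776

posterior = p_unfair * like_unfair / (p_unfair * like_unfair + p_fair * like_fair)
print(round(posterior, 5))  # 0.99884
```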
46,466
Recurring problem with retrospective data collection study designs I'm seeing
You are right that this is a very common scenario in medical research. "I should note that these studies are not meant to invent a new method of treatment or change protocols, they are used to see what variables are of interest for future research." OK, I take this to mean that you are interested in causal inference,...
46,467
Reverse causality opposite definitions
Reverse causality is particularly problematic for DAGs because it often implies either a reversal of a causal path, or feedback loop (which would make it a Directed Cyclic Graph) rendering the usual DAG analysis invalid. Nevertheless, a lot can still be said using DAGs even where reverse causality is present or suspect...
46,468
How to test for statistical significance with multiple visits and technical replicates?
"However this solution doesn't take into account the two replicates for each visit or separate visits." Correct. "How would I do this?" You need to account for repeated visits for each patient, and for repeated replicates within each visit for each patient. This is because measurements for the same patient are likely ...
46,469
Variance Ratio Formula
You're right: indeed, there is an algebraic solution. The optimization must occur over the set of $c$ for which the denominator is nonzero. I will leave to interested readers the special case where there exist nonzero $c$ for which the denominator nevertheless is zero: this is equivalent to at least one of the compone...
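A hedged numerical sketch of the algebraic route (the answer above is truncated, so the details are assumed): maximizing a ratio of quadratic forms $c^TAc/c^TBc$ with $B$ positive definite is a generalized symmetric eigenproblem, and the maximum equals the largest generalized eigenvalue of $(A, B)$:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
M1 = rng.normal(size=(4, 4)); A = M1 @ M1.T               # symmetric PSD numerator
M2 = rng.normal(size=(4, 4)); B = M2 @ M2.T + np.eye(4)   # positive-definite denominator

# Largest generalized eigenvalue of (A, B) = max over c of c'Ac / c'Bc
vals, vecs = eigh(A, B)
c = vecs[:, -1]
print(np.isclose((c @ A @ c) / (c @ B @ c), vals[-1]))  # True
```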
46,470
How reparameterize Beta distribution?
There is always the obvious inverse cdf representation: $$X=F_{\alpha,\beta}^{-1}(U)$$ where $F_{\alpha,\beta}^{-1}(\cdot)$ is the inverse cdf (quantile function) of the Beta $\mathcal Be(\alpha,\beta)$ distribution. Otherwise, the Wikipedia page lists a large collection of connections with other standard distributions...
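The inverse-cdf representation can be checked directly with scipy's beta quantile function (the parameters $\alpha=2,\beta=5$ are chosen for illustration; the sample mean should be near $\alpha/(\alpha+\beta)=2/7$):

```python
import numpy as np
from scipy import stats

# X = F^{-1}(U) with U ~ Uniform(0, 1) yields X ~ Beta(alpha, beta)
rng = np.random.default_rng(3)
u = rng.uniform(size=100_000)
x = stats.beta.ppf(u, a=2.0, b=5.0)

print(x.mean())  # close to alpha / (alpha + beta) = 2/7 ≈ 0.2857
```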
46,471
How reparameterize Beta distribution?
If you mean representing every beta-distributed random variable as some simple function of the two parameters $\alpha,\beta$ and some "standard beta" random variable, then probably it cannot be done. One alternative to the simple standard way of parameterizing this family of distributions that has crossed my mind is as...
46,472
Advantage & disadvantage of PCA vs kernel PCA
Kernel PCA (kPCA) actually includes regular PCA as a special case--they're equivalent if the linear kernel is used. But, they have different properties in general. Here are some points of comparison: Linear vs. nonlinear structure. kPCA can capture nonlinear structure in the data (if using a nonlinear kernel), whereas...
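A quick check of the special-case claim: on the iris data, kPCA with the linear kernel reproduces the PCA scores up to per-component sign flips:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, KernelPCA

X = load_iris().data
scores_pca = PCA(n_components=2).fit_transform(X)
scores_kpca = KernelPCA(n_components=2, kernel="linear").fit_transform(X)

# Components may come out with flipped signs; otherwise the scores coincide
match = all(
    np.allclose(scores_pca[:, j], scores_kpca[:, j], atol=1e-5)
    or np.allclose(scores_pca[:, j], -scores_kpca[:, j], atol=1e-5)
    for j in range(2)
)
print(match)  # True
```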
46,473
How do I deal with large amount missing values in a data set without dropping them?
Because NA values are informative for your dataset, you don't want to drop NAs or impute values. If a patient doesn't get an X-ray, they probably didn't break a bone. So you want to learn from NA values. A common approach is to add an indicator column for NA values.
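A minimal pandas sketch of the indicator-column approach (the column names are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical column: a measurement only taken for some patients
df = pd.DataFrame({"xray_score": [1.2, np.nan, 0.7, np.nan]})

# Indicator column lets the model learn from the missingness itself
df["xray_missing"] = df["xray_score"].isna().astype(int)
df["xray_score"] = df["xray_score"].fillna(0.0)  # neutral fill after flagging
print(df)
```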
46,474
How do I deal with large amout missing values in a data set without dropping them?
A linear mixed effects model would allow you to have individuals with missing data and not need to convert everything over to categories. If ever you have a continuous variable, use it as a continuum if at all possible. Here is a link to a paper that explains more about why. It is not just for psychologists, the same ...
46,475
Alternatives to minimizing loss in regression
Rational choice theory says that any rational preference can be modeled with a utility function. Therefore any (rational) decision process can be encoded in a loss function and posed as an optimization problem. For example, L1 and L2 regularization can be viewed as encoding a preference for smaller parameters or more p...
46,476
Alternatives to minimizing loss in regression
But is accuracy the only important virtue of a model? The practical aspects of what a model's for is too nuanced for a theoretical discussion. Interpretation and generalizability come to mind. "Who will use this model?" should be a top line question in all statistical analyses. Friedman's statement is defensible in a ...
46,477
An easy decision when to use a spline or a polynomial
My RMS book and course notes go into detail about this. Briefly, polynomials are too restrictive, allow a point in one part of the curve to too greatly influence the fit in other parts of the curve, and the fits are not as good as segmented polynomials (splines). Polynomials cannot well approximate threshold effects ...
46,478
What is distribution of $\sin(x)$? If x is exponential distribution
The cumulative distribution function (cdf) of a variable $X$ with an exponential distribution can be written $$F_\lambda(x) = \Pr(X\le x) = 1 - e^{-\lambda x}.$$ Consequently, for any interval determined by $0\le a\le b,$ the chance $X$ lies in this interval is $$\Pr(a\lt X\le b) = F_\lambda(b)-F_\lambda(a) = e^{-\lamb...
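A Monte Carlo check of the interval probability $\Pr(a<X\le b)=e^{-\lambda a}-e^{-\lambda b}$ (values of $\lambda$, $a$, $b$ chosen for illustration):

```python
import numpy as np

lam, a, b = 0.5, 1.0, 2.0
exact = np.exp(-lam * a) - np.exp(-lam * b)  # Pr(a < X <= b)

rng = np.random.default_rng(4)
x = rng.exponential(scale=1 / lam, size=1_000_000)
print(exact, np.mean((x > a) & (x <= b)))  # both ≈ 0.2387
```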
46,479
Integrating with a multivariate Gaussian
The means and covariances already evaluate all the integrals you need, allowing this result to be obtained purely algebraically. It actually has nothing to do with Normal distributions (except insofar as they have finite covariances in the first place). Let $X=(X_1,X_2,\ldots,X_n)$ be a multivariate random variable w...
46,480
Integrating with a multivariate Gaussian
The solution posted by whuber gets at this idea, but I wanted the approach to use the trace operator more explicitly. Start from $$\mathbb{E}(u^TAu) = \mathbb{E}(tr(u^TAu)).$$ Note that the quadratic form inside the expectation is a scalar, and the trace of a scalar is that same scalar. Next use the cyclic swap property of the tra...
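The two steps used here, a scalar equals its own trace and the cyclic swap $\operatorname{tr}(u^TAu)=\operatorname{tr}(Auu^T)$, can be verified on arbitrary numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=3)
A = rng.normal(size=(3, 3))

quad = u @ A @ u                          # the scalar u^T A u
via_trace = np.trace(A @ np.outer(u, u))  # tr(A u u^T), after the cyclic swap
assert np.isclose(quad, via_trace)
```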
46,481
Difference between covariates and treatment confounders in propensity score matching
The definition of a confounder is somewhat complicated, but VanderWeele & Shpitser (2013) decided: "A pre-exposure covariate C is a confounder for the effect of A on Y if it is a member of some minimally sufficient adjustment set." A sufficient adjustment set is a set of variables conditioning on which is sufficient t...
46,482
How is Logistic Regression related to Logistic Distribution?
One way of defining logistic regression is just introducing it as $$ \DeclareMathOperator{\P}{\mathbb{P}} \P(Y=1 \mid X=x) = \frac{1}{1+e^{-\eta(x)}} $$ where $\eta(x)=\beta^T x$ is a linear predictor. This is just stating the model without saying where it comes from. Alternatively we can try to develop the model fro...
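Stated as code, that model definition is a one-liner; `logistic_prob` is just a hypothetical helper name, with `beta` and `x` as plain lists:

```python
import math

def logistic_prob(beta, x):
    """P(Y=1 | X=x) = 1 / (1 + exp(-eta)) with linear predictor eta = beta^T x."""
    eta = sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-eta))

# eta = 0.5 - 0.5 = 0 gives probability exactly 1/2
print(logistic_prob([0.5, -0.5], [1.0, 1.0]))  # eta = 0 -> 0.5
```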
46,483
How is Logistic Regression related to Logistic Distribution?
One way to think of it is to consider the latent variable interpretation of logistic regression. In this interpretation, we consider a linear model for $Y^*$, a latent (i.e., unobserved) variable that represents the "propensity" for $Y=1$. So, we have $Y^*=X\beta + \epsilon$. We get the observed values of $Y$ as $Y=I(Y...
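A small simulation of that latent-variable story (assuming $\epsilon$ follows a standard logistic distribution, drawn by inverse CDF; the coefficient and covariate value are arbitrary) recovers the logistic response probability:

```python
import math
import random

random.seed(0)
beta, x = 0.8, 1.0      # arbitrary coefficient and covariate value
n = 200_000

ones = 0
for _ in range(n):
    u = random.random()
    eps = math.log(u / (1 - u))   # standard logistic draw via inverse cdf
    y_star = beta * x + eps       # latent propensity Y* = X beta + eps
    ones += y_star > 0            # observed Y = I(Y* > 0)

empirical = ones / n
theoretical = 1 / (1 + math.exp(-beta * x))  # logistic cdf at beta*x
```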
46,484
Gaussian Processes: A Crucial Assumption?
This assumption is not universally valid (of course). Moreover, in many cases it is not even necessary to make! Relevant examples where it is obviously not valid are: strictly positive data (since a Gaussian has always a chance of being negative) or monotonic or convex data (same reason just for first and second deriva...
46,485
Gaussian Processes: A Crucial Assumption?
By definition, a random process is a collection of random variables indexed by the elements of some set $\mathbb T$, which is typically $\mathbb R$ or $\mathbb Z$. Thus, the random process is the set $\{X(t)\colon t \in \mathbb T\}$ where $X(t)$ is called the $t$-th random variable. By definition, a Gaussian rando...
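Concretely, picking any finite set of index points and a covariance function (a squared-exponential kernel here, an arbitrary choice) gives an ordinary multivariate normal you can sample from:

```python
import numpy as np

def sq_exp_kernel(s, t, ell=1.0):
    """Squared-exponential covariance k(s, t) = exp(-(s-t)^2 / (2 ell^2))."""
    return np.exp(-((s - t) ** 2) / (2 * ell ** 2))

ts = np.linspace(0.0, 5.0, 50)               # a finite set of index points
K = sq_exp_kernel(ts[:, None], ts[None, :])  # covariance of (X(t_1), ..., X(t_50))
K += 1e-9 * np.eye(ts.size)                  # tiny jitter for numerical stability

# the finite-dimensional distribution is just a 50-dimensional Gaussian
rng = np.random.default_rng(0)
path = rng.multivariate_normal(np.zeros(ts.size), K)
print(path.shape)  # (50,)
```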
46,486
Why is sufficient statistics/data reduction normally taught in Statistics?
This answer is an oversimplification, bound to criticism, but I also believe it carries the essence of why sufficient statistics are useful: the motivation for a sufficient statistic is the possibility it gives us of assessing information on the entire population without needing all the data. Say you ge...
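A concrete instance: for i.i.d. Bernoulli($p$) data the number of successes is sufficient, so two samples with the same total carry exactly the same likelihood information about $p$ (the samples and $p$ values below are made up):

```python
import math

def bernoulli_likelihood(p, data):
    """Likelihood of an i.i.d. Bernoulli(p) sample of 0/1 values."""
    return math.prod(p ** x * (1 - p) ** (1 - x) for x in data)

d1 = [1, 0, 1, 1, 0]
d2 = [0, 1, 1, 0, 1]  # different ordering, same sufficient statistic sum(x) = 3

for p in (0.2, 0.5, 0.73):
    assert math.isclose(bernoulli_likelihood(p, d1), bernoulli_likelihood(p, d2))
```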
46,487
Why is sufficient statistics/data reduction normally taught in Statistics?
What you've been told is certainly NOT true. Data reduction is as important as ever. See for example Donoho's work on Compressed Sensing and thresholding estimators. Wavelet estimators and regularised estimators also work similarly - the aim is to compress the data into as few coefficients as possible. There is a parallel too wit...
46,488
Why is sufficient statistics/data reduction normally taught in Statistics?
You are correct in suggesting that the availability of almost limitless computational resources means that the importance of data reduction is lessened. For example, resampling statistics, at one time too computationally expensive for practical use, allow the entire sample to be utilised directly without assumption of ...
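As an illustration of the resampling approach mentioned here, a basic nonparametric bootstrap of the standard error of a mean needs nothing beyond the raw sample (all numbers below are arbitrary):

```python
import random
import statistics

random.seed(0)
data = [random.gauss(10, 2) for _ in range(100)]  # the raw sample, kept in full

boot_means = []
for _ in range(2000):
    resample = random.choices(data, k=len(data))  # sample with replacement
    boot_means.append(statistics.fmean(resample))

se_boot = statistics.stdev(boot_means)  # bootstrap SE of the sample mean
print(round(se_boot, 3))  # close to the classical sd/sqrt(n), roughly 0.2 here
```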
46,489
Where does linear regression fit into the bias-variance tradeoff?
OLS is an unbiased estimator assuming the model is true, which is to say: effects are exactly linear; all variables with non-zero effects are included; all interactions are included; no non-linear effects or other small model inadequacies. See my answer at Why do irrelevant regressors become statistically significant ...
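The unbiasedness claim is easy to check by simulation when the model really is linear; the true slope, noise level, and design points below are arbitrary:

```python
import random

random.seed(0)

def ols_slope(xs, ys):
    """Ordinary least-squares slope estimate for simple regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

true_slope = 2.0
xs = [i / 10 for i in range(20)]

estimates = []
for _ in range(5_000):
    ys = [1.0 + true_slope * x + random.gauss(0, 1) for x in xs]
    estimates.append(ols_slope(xs, ys))

mean_estimate = sum(estimates) / len(estimates)
print(round(mean_estimate, 2))  # averages to the true slope, 2.0 give or take MC error
```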
46,490
Where does linear regression fit into the bias-variance tradeoff?
Linear regression is a general term. When used, $y=ax+b+\epsilon$ is what comes to mind first; however, $y=ax^2+bx+c+\epsilon$ is also linear regression, i.e. $x_2=x, x_1=x^2$ and $y=ax_1+bx_2+c+\epsilon$. It's just that we use polynomial features. The data (target) can be of parabolic nature but it can still be estimated v...
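A quick numeric sketch of that point: parabolic data fit by a linear model on the features $[x^2, x, 1]$ (the true coefficients and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200)
y = 3.0 * x**2 - 1.0 * x + 0.5 + rng.normal(0.0, 0.1, size=x.size)

# linear regression on polynomial features: y = a*x^2 + b*x + c
features = np.column_stack([x**2, x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
print(np.round(coef, 2))  # recovers roughly [3., -1., 0.5]
```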
46,491
Resources for hierarchical modelling in R
If you just want a practical guide to fitting mixed models / multilevel models, then the link provided by Mark White is a very good one: https://rpsychologist.com/r-guide-longitudinal-lme-lmer However, if you seek to understand the theory, then I would highly recommend looking at mixed models - of which multilevel mode...
46,492
Resources for hierarchical modelling in R
My favorite is: https://rpsychologist.com/r-guide-longitudinal-lme-lmer. He shows both commonly-used packages, and he includes the equations alongside the code—so you can easily reference back to books from there.
46,493
Resources for hierarchical modelling in R
Take a look at these: the GLMM FAQ by Ben Bolker and others: https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html; the nlme package: https://cran.r-project.org/web/packages/nlme/nlme.pdf, which allows non-linear mixed models and correlation structures; and glmmTMB: https://cran.r-project.org/web/packages/glmmTMB/glmmTMB.pdf. Enjoy!
46,494
Resources for hierarchical modelling in R
I recently built an R package on Bayesian network modeling. On the package description page you'll find various examples of hierarchical models, their CPDs, graphical model structures, learning/inference algorithms and the corresponding R code.
46,495
Resources for hierarchical modelling in R
I find this resource very helpful; it also contains methods for cross-sectional nested data: https://methodenlehre.github.io/intro-to-rstats/hierarchical-linear-models.html
46,496
Do GEE and GLM estimate the same coefficients?
Yes. GEE and GLM will indeed have the same coefficients, but different standard errors. To check, run an example in R. I've taken this example from Chapter 25 of Applied Regression Analysis and Other Multivariable Methods, 5th ed., by Kleinbaum et al. (just because it's on my desk and references GEE and GLM): library(gee...
46,497
Do GEE and GLM estimate the same coefficients?
It depends on exactly what you mean and what you're assuming. If you use the independence working correlation, the parameter estimates $\hat\beta$ in glm and GEE will be identical, with only the standard errors being potentially different If you use another working correlation, the parameter estimates $\hat\beta$ will...
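For the Gaussian/identity case the first point can be verified directly: with working covariance $V_i=I$, the GEE estimating equations $\sum_i X_i^T V_i^{-1}(y_i - X_i\beta)=0$ collapse to the pooled OLS normal equations. The clustered data below are simulated just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, m, p = 30, 4, 3
X = np.column_stack([np.ones(n_clusters * m), rng.normal(size=(n_clusters * m, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
cluster_effect = np.repeat(rng.normal(0.0, 1.0, n_clusters), m)  # induces within-cluster correlation
y = X @ beta_true + cluster_effect + rng.normal(0.0, 0.5, n_clusters * m)

# GLM / pooled OLS: solve X'X beta = X'y
beta_glm = np.linalg.solve(X.T @ X, X.T @ y)

# GEE with independence working correlation: sum_i X_i'(y_i - X_i beta) = 0
XtX = np.zeros((p, p))
Xty = np.zeros(p)
for i in range(n_clusters):
    Xi, yi = X[i * m:(i + 1) * m], y[i * m:(i + 1) * m]
    XtX += Xi.T @ Xi
    Xty += Xi.T @ yi
beta_gee = np.linalg.solve(XtX, Xty)

print(np.allclose(beta_glm, beta_gee))  # True: identical point estimates
```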
46,498
Do GEE and GLM estimate the same coefficients?
I think it may not. The estimating equations, in their formulas, depend on the inverse of the working covariance matrix. If we change it, the beta coefficients will change too, because the entire equation changes. In the GLM, by contrast, a working correlation is not applicable - it is fixed at independence. And it can be sho...
46,499
How to justify that $(Y_1,Y_2)$ is not bivariate normal without finding its exact distribution?
without explicitly finding the distribution of $(Y_1,Y_2)$ can I justify that the distribution is not jointly normal? One obvious way would be to see that $Y_1$ and $Y_2$ cannot be opposite in sign, and therefore cannot be bivariate normal. Equivalently, note that $Y_1Y_2=\text{sign}(X_1)X_1\,\text{sign}(X_2)X_2$ $=|...
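Taking the transformation to be $Y_1=\operatorname{sign}(X_2)X_1,\ Y_2=\operatorname{sign}(X_1)X_2$ (an assumption on my part - any version with $Y_1Y_2=|X_1X_2|$ behaves the same way), simulation confirms the pair never takes opposite signs, while the genuinely bivariate normal pair $(X_1, X_2)$ does:

```python
import random

random.seed(0)
sign = lambda v: 1.0 if v >= 0 else -1.0

opposite_y = opposite_x = 0
for _ in range(100_000):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    y1, y2 = sign(x2) * x1, sign(x1) * x2  # assumed form of the transformation
    opposite_y += y1 * y2 < 0              # Y1*Y2 = |X1*X2| >= 0, so never happens
    opposite_x += x1 * x2 < 0              # the original pair IS bivariate normal

print(opposite_y, opposite_x > 0)  # 0 True
```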
46,500
How to justify that $(Y_1,Y_2)$ is not bivariate normal without finding its exact distribution?
To see what happens, let's explicitly find the distribution. You could see it as a transformation from the entire plane to the first and third quadrants. Transform the first quadrant ($X_1>0, X_2>0$) to itself: $Y_1,Y_2 = X_1,X_2$. Mirror the third quadrant ($X_1<0, X_2<0$) to the first through the origin: $Y_1,Y_2 = -...