Q&A excerpts (rows 42,701–42,728) from a 56k-row question/answer dataset.
42,701 | Observations for a bivariate Gaussian mixture

Alternatively, you could use the necessary and sufficient condition that $X_1$ and $X_2$ are independent iff their joint density is the product of the two marginal densities:
$$f_{(X_1,X_2)} = f_{X_1} f_{X_2}$$
And this you could work out more easily (more easily in the sense that the p...
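As an illustrative numerical check of this factorization criterion, here is a small Python sketch using a bivariate normal with zero correlation, a case where independence does hold (the evaluation point is arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# With zero correlation, the joint bivariate normal density factorizes
# into the product of the marginals: f_(X1,X2)(x1, x2) = f_X1(x1) * f_X2(x2).
pt = np.array([0.3, -1.2])
joint = multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2)).pdf(pt)
product = norm.pdf(pt[0]) * norm.pdf(pt[1])
```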
42,702 | Compare glm.nb vs glm(..., negative.binomial(k), ..) models

The chi-squared test is valid here: it is testing the hypothesis
$$H_0: \phi = 1$$
with $\phi = \frac{1}{\theta}$ the overdispersion parameter. It basically performs a likelihood-ratio test to compare both models (see ?anova). This is valid since your model with fixed overdispersion parameter is nested in the one...
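That likelihood-ratio comparison boils down to a one-line calculation; a hedged Python sketch (the two log-likelihood values below are made up for illustration, not taken from any fitted model):

```python
from scipy.stats import chi2

def lr_test(loglik_fixed, loglik_free, df=1):
    """LR test of H0: phi = 1. The model with a fixed overdispersion
    parameter is nested in the one that estimates theta freely."""
    stat = 2.0 * (loglik_free - loglik_fixed)
    return stat, chi2.sf(stat, df)

# Hypothetical log-likelihoods of the restricted and free models:
stat, p = lr_test(loglik_fixed=-1204.3, loglik_free=-1200.1)
```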
42,703 | Compare glm.nb vs glm(..., negative.binomial(k), ..) models

ANOVA is used to compare nested models, i.e., models $M_1$ and $M_2$ where all predictors that appear in $M_1$ also appear in $M_2$, but $M_2$ contains additional ones. ANOVA then answers the question whether the additional predictors explain more variance than we would expect by chance alone.
Of course, one could also...
42,704 | Hints for exercise 7.3 from The elements of statistical learning

Some hints: You correctly note
$$X_{-i}^T X_{-i} = X^T X - \vec{x}_i \vec{x}_i^T$$
($\vec{x}_i$ is a column vector), and that you need to find
$$\hat{\vec{\beta}}_{-i} = (X_{-i}^T X_{-i})^{-1} X_{-i}^T \vec{y}_{-i},$$
the estimated coefficients obtained by leaving out sample $i$. This will lead you to the new predicted value fo...
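The intended shortcut for this rank-one downdate is the Sherman–Morrison identity; a quick numerical check in Python (random illustrative data, not part of the exercise):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
x = X[:1].T                          # left-out row x_i as a column vector
A = X.T @ X
A_inv = np.linalg.inv(A)

# Sherman–Morrison:
# (A - x x^T)^{-1} = A^{-1} + A^{-1} x x^T A^{-1} / (1 - x^T A^{-1} x)
lhs = np.linalg.inv(A - x @ x.T)
rhs = A_inv + (A_inv @ x @ x.T @ A_inv) / (1.0 - x.T @ A_inv @ x)
```

The denominator $1 - \vec{x}_i^T (X^TX)^{-1} \vec{x}_i$ is one minus the leverage of observation $i$, which is what ties this identity to the LOOCV shortcut.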
42,705 | How many Americans, randomly chosen, are needed to have a 50% chance two live in the same or adjacent states?

I'll answer question b) because it's more general, and question a) can just be thought of as a special case of b) where the adjacency matrix is simply the identity matrix. I'll give you the exact method, though approximate methods might be called for because the computation of the exact solution scales rapidly with num...
42,706 | How many Americans, randomly chosen, are needed to have a 50% chance two live in the same or adjacent states?

It is possible to solve this using Markov matrices to model the random process of selecting people. This approach requires quite a bit of effort to set up, but it does have a structured way to get your answer.
Markov matrices are used to model a random process which can move between discrete "states" (to avoid confusion...
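Monte Carlo is one of the "approximate methods" such an analysis can fall back on; here is a toy Python sketch with a made-up 4-state adjacency matrix and uniform state probabilities (the real problem has 50 states with population-weighted probabilities):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy version with 4 equally likely states; adj[i, j] = 1 when states i and j
# are identical or adjacent (a made-up chain of states, purely illustrative).
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]])

def prob_close_pair(n_people, n_sims=20_000):
    """Monte Carlo estimate of P(some pair lives in the same or adjacent states)."""
    hits = 0
    for _ in range(n_sims):
        states = rng.integers(0, 4, size=n_people)
        pairs = adj[np.ix_(states, states)] - np.eye(n_people)  # drop self-pairs
        if np.any(pairs > 0):
            hits += 1
    return hits / n_sims
```

For two people the true probability here is 10/16 = 0.625 (ten of the sixteen ordered state pairs are "close"), which the simulation recovers; the actual question then asks for the smallest n whose probability reaches 0.5.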
42,707 | Why is $z$-value more meaningful than $p$-value for very low $p$-values?

A low $p$-value does not indicate the assumptions were violated. Some really low $p$-values like $2.22\times10^{-16}$ just indicate the limit of the machine, something called machine epsilon. Once a number gets that low, the machine can't represent anything smaller, so it reports that limiting value or zero.
The reason one could report the $z$ at t...
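To see the limitation concretely, a Python sketch: beyond a point, tail probabilities underflow to exactly zero in double precision, so very different test statistics become indistinguishable on the $p$-value scale, while the $z$ scale (or log scale) still separates them.

```python
from scipy.stats import norm

# Two very different test statistics whose p-values are both exactly 0.0
# in double precision -- indistinguishable on the p-value scale:
p40, p45 = norm.sf(40), norm.sf(45)

# The z-values themselves (or the log p-values) preserve the ordering:
log_p40, log_p45 = norm.logsf(40), norm.logsf(45)
```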
42,708 | How to compute expected values of compound events?

This is an exercise in using indicator variables. An indicator has a value of $1$ to signify some condition holds and has a value of $0$ otherwise. Seemingly difficult problems about probability and expectation can have simple solutions that exploit indicators and linearity of expectation--even when the random variab...
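A concrete instance of the technique, offered purely as an illustration: counting birthday-sharing pairs among $n$ people by summing one indicator per pair.

```python
from itertools import combinations
from fractions import Fraction

# Number of pairs sharing a birthday = sum over pairs of the indicator
# "this pair matches". By linearity of expectation, the expected count is
# C(n, 2) * P(one given pair matches) = C(n, 2) / 365 -- no independence
# between the indicators is needed.
n, days = 23, 365
expected = sum(Fraction(1, days) for _ in combinations(range(n), 2))
print(expected)   # 253/365
```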
42,709 | Understanding direction of greatest variance in PCA

Your confusion here comes from a misunderstanding of how Cartesian coordinates work. Remember: the orthogonal distances of the points from the axis labeled $\mathbf{v}$ are the $u$ coordinates. That is, they measure the distance parallel to the vector $\mathbf{u}$ from the origin. You are absolutely correct that the ...
42,710 | Understanding direction of greatest variance in PCA

The direction of greatest variance is the direction along which the data points vary the most; their variance (or range) should be highest along it. In our case, it is the direction 'u'. In the direction 'v', the data points do not vary as much a...
42,711 | In Gaussian Process binary classification, why are sigmoid functions preferred over Gaussian functions?

I believe they mention this in the footnote to chapter 3 (first page):

> One may choose to ignore the discreteness of the target values, and use a regression treatment, where all targets happen to be say ±1 for binary classification. This is known as least-squares classification, see section 6.5.

Looking at 6.5 http://w...
42,712 | In Gaussian Process binary classification, why are sigmoid functions preferred over Gaussian functions?

The problem with this approach is that the number of terms in $p(\mathbf y|\mathbf f)$ would grow exponentially with the number of negatively-labelled points in the training set, so the closed-form solution to (3.9) would have exponential time complexity. More specifically, if we assume, without loss of generality, tha...
42,713 | How to determine random effects in mixed model

You can test whether the variance in slopes (and the covariance between slope and intercept) is significant by fitting one model with just the random intercept and another model with the random slope and intercept. Then you can do a nested model comparison between the two:

```
mod1 <- lmer(... + (1|state), ...)
mod2 <- lmer(... + ...
```
42,714 | How to determine random effects in mixed model

Exploratory analyses for describing correlation structures in dependent data include variograms for continuous spatio-temporal data, intraclass correlation coefficients for clustered data, and lorelograms for binary outcomes.
Other descriptive statistics include bootstrapped or profile likelihood confidence intervals for v...
42,715 | How to do weight normalization in VGG network for style transfer?

"-is that capturing the activation maps for all images in imagenet (training set) and then adjust the relu weights based on those sums across all images, all positions for each filter element?"
I suppose your guess is correct. I came across the normalised network that they used and inspected its activation matrices...
42,716 | Alternatives to three dimensional scatter plot

I think what primarily needs to be added to your list is coplots, but let's work our way up to that. The starting point for visualizing two continuous variables should always be a scatterplot. With more than two variables, that generalizes naturally to a scatterplot matrix (although if you have lots of variables, you...
42,717 | Chinese Restaurant process (CRP)

This implementation is using the Polya urn representation of the Dirichlet process, as described by Blackwell and MacQueen (1973). In the link you've provided, this particular part of the process is described as "With probability α/(1+α) he sits down at a new table." Conceptually one can think of this as capturing th...
42,718 | Chinese Restaurant process (CRP)

The CRP is a model used with graphical models to simulate how many clusters you have.
It's not applied to data points. In fact, it is a prior, and does not depend on the data at all.
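A minimal simulation of the CRP seating scheme (a sketch; the concentration parameter α and the number of customers are arbitrary choices). Note that nothing here looks at any data: the partition is drawn purely from the prior.

```python
import random

def crp(n_customers, alpha, seed=0):
    """Simulate CRP seating; returns the list of table sizes.
    The (n+1)-th customer joins an existing table with probability
    proportional to its occupancy, or opens a new table with
    probability alpha / (n + alpha)."""
    rng = random.Random(seed)
    tables = []                        # tables[k] = customers at table k
    for _ in range(n_customers):
        weights = tables + [alpha]     # last slot = "open a new table"
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(tables):
            tables.append(1)           # new table
        else:
            tables[k] += 1
    return tables

sizes = crp(100, alpha=1.0)
```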
42,719 | Integrating previous model's parameters as priors for Bayesian modeling of new data

In general, informing a prior requires a lot of judgment calls (and justification in the write-up). There are several steps:
Collect the relevant previous studies that could inform the present one. This step is much like collecting previous studies for meta-analysis. You want to be sure it doesn't suffer from the file...
42,720 | Flat prior in Bayesian? Confidence intervals in classical statistics turn into credible interval?

I am going to be snotty and say "no." Of course, an element of this is your wording of the question "can I." No. I forbid it. You cannot say that or anything at all like it. I also forbid you to say "turnip" for the entire month of May. Not just this May, but every May.
In a more serious vein, the answer is still...
42,721 | Optimum approximate theory D-Optimal design

OK, this is a bit complicated, but I will try to explain some issues here.
First of all, you need to know that you can calculate the theoretical (i.e. the maximum possible) and practically obtained (i.e. the ones that you get in a given configuration of cards and their sets) values for D-efficiency. The "quality" of t...
42,722 | Finding the best cookie recipe. Hyper-parameter optimization using noisy local comparison

If you're interested in use (more than in development), you should give rankade, our ranking system, a try. Rankade is free and easy to use, it can manage small to large playing groups (composed of players or 'cookies', as per your needs, or whatever), and it features rankings, stats, and more. It doesn't cover all o...
42,723 | What is the distribution of the maximum of independent non identical Binomial variables?

As whuber correctly points out in the comments, the random variable $X$ is discrete with support on the same space as the original random variables. Hence, the maximum possible value of $X$ is $m$, and it does not make sense to use a normal approximation (or any other approximation) that would allow a larger maximum t...
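Since $X=\max_i X_i$ is discrete on $\{0,\dots,m\}$ and the $X_i$ are independent, its CDF is the product of the individual binomial CDFs, $P(X \le x) = \prod_i P(X_i \le x)$; a Python sketch with made-up parameters $n_i, p_i$:

```python
import numpy as np
from scipy.stats import binom

# X = max of independent X_i ~ Binomial(n_i, p_i) (illustrative parameters).
ns = [10, 15, 20]
ps = [0.3, 0.5, 0.2]
x = np.arange(max(ns) + 1)            # support {0, ..., m}, m = max(n_i)
cdf_max = np.prod([binom.cdf(x, n, p) for n, p in zip(ns, ps)], axis=0)
pmf_max = np.diff(np.concatenate(([0.0], cdf_max)))   # exact pmf of the max
```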
42,724 | "ARIMA" versus "ARMA on differenced data" gives different prediction interval

The prediction intervals from ARIMA(p,1,q) for the original data as produced by the function Arima will be correct, while those from ARIMA(p,0,q) for differenced data produced by manually undifferencing the forecasts the way you do that will be incorrect.

Illustration: Suppose the last observed value is $x_t=100$. Supp...
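The mechanism is easiest to see for a pure random walk, ARIMA(0,1,0): the correct $h$-step-ahead forecast variance for the level is $h\sigma^2$, so the interval half-widths grow like $\sqrt{h}$, while reusing the differenced series' one-step interval after undifferencing gives a constant width. A sketch with an assumed innovation sd:

```python
import numpy as np

sigma = 2.0           # assumed sd of the differenced (white-noise) series
h = np.arange(1, 6)   # forecast horizons 1..5

# Correct 95% half-widths for the level under ARIMA(0,1,0): grow with sqrt(h)
correct = 1.96 * sigma * np.sqrt(h)

# Naive half-widths from undifferencing ARMA point forecasts while keeping
# the differenced series' one-step interval: constant, hence too narrow
naive = np.full(h.shape, 1.96 * sigma)
```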
42,725 | What can go wrong with MLE if I substitute some first-stage estimates instead of some parameters?

Your technique is essentially maximizing the conditional log-likelihood, conditioned on $\tilde \theta_{m+1},\ldots,\tilde \theta_k$. The complete maximum log-likelihood is the maximum of this conditional maximum across all these other parameters. This is very frequently used to produce likelihood scans, especially whe...
42,726 | Is there an accepted method to determine an approximate dimension for manifold learning | I am not quite sure if I understood your confusion correctly, if you accept the embedding principle (i.e. the "manifold assumption") the only way you can "decide" your dimension is to construct a projector to low-dimensional manifold. [Levina&Bickel] pointed out that eigenvalue(spectral projector) and projection are tw... | Is there an accepted method to determine an approximate dimension for manifold learning | I am not quite sure if I understood your confusion correctly, if you accept the embedding principle (i.e. the "manifold assumption") the only way you can "decide" your dimension is to construct a proj | Is there an accepted method to determine an approximate dimension for manifold learning
I am not quite sure if I understood your confusion correctly, if you accept the embedding principle (i.e. the "manifold assumption") the only way you can "decide" your dimension is to construct a projector to low-dimensional manifol... | Is there an accepted method to determine an approximate dimension for manifold learning
I am not quite sure if I understood your confusion correctly, if you accept the embedding principle (i.e. the "manifold assumption") the only way you can "decide" your dimension is to construct a proj |
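Alongside the spectral-projector view, the [Levina&Bickel] reference pointed to above is itself a direct way to "decide" a dimension. A hedged sketch of their nearest-neighbour maximum-likelihood estimator (one common normalization of it; the toy line-in-the-plane data and k = 10 are my choices):

```python
import math, random

random.seed(4)

def lb_dimension(points, k=10):
    # Levina-Bickel MLE of intrinsic dimension (one common normalization):
    # m_hat(x) = [ (1/(k-1)) * sum_{j=1}^{k-1} log(T_k(x) / T_j(x)) ]^(-1),
    # where T_j(x) is the distance from x to its j-th nearest neighbour;
    # the per-point estimates are then averaged over the sample.
    ests = []
    for i, x in enumerate(points):
        d = sorted(math.dist(x, p) for j, p in enumerate(points) if j != i)
        s = sum(math.log(d[k - 1] / d[j]) for j in range(k - 1))
        ests.append((k - 1) / s)
    return sum(ests) / len(ests)

# Sanity check: 400 points on a straight line embedded in 2-D should look 1-D
pts = [(t, 2 * t + 1) for t in (random.random() for _ in range(400))]
d_hat = lb_dimension(pts)
```

The estimate comes out near 1 (with a small known upward bias at finite k), even though the ambient dimension is 2.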
42,727 | Semi-Hidden Markov Model with parameters of the emission probabilities depending on regressors | Here is some quick R code to get you started. Major caveats: I wrote this myself, so it could be buggy, statistically incorrect, poorly styled... use at your own risk!
params_are_valid <- function(params) {
stopifnot("lambdas" %in% names(params)) # Poisson parameters for each hidden state
stopifnot(all(para... | Semi-Hidden Markov Model with parameters of the emission probabilities depending on regressors | Here is some quick R code to get you started. Major caveats: I wrote this myself, so it could be buggy, statistically incorrect, poorly styled... use at your own risk!
params_are_valid <- function(p | Semi-Hidden Markov Model with parameters of the emission probabilities depending on regressors
Here is some quick R code to get you started. Major caveats: I wrote this myself, so it could be buggy, statistically incorrect, poorly styled... use at your own risk!
params_are_valid <- function(params) {
stopifnot("l... | Semi-Hidden Markov Model with parameters of the emission probabilities depending on regressors
Here is some quick R code to get you started. Major caveats: I wrote this myself, so it could be buggy, statistically incorrect, poorly styled... use at your own risk!
params_are_valid <- function(p |
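The R snippet is cut off above; as a language-neutral illustration of the question's modeling idea — emission probabilities whose Poisson rate depends on regressors through a log link — here is a hedged Python sketch (the function names and the two-coefficient linear predictor are illustrative, not the post's code):

```python
import math

def poisson_logpmf(y, lam):
    # log P(Y = y) for a Poisson(lam) emission
    return y * math.log(lam) - lam - math.lgamma(y + 1)

def emission_loglik(y, x, beta):
    # Hypothetical regressor-dependent rate via a log link,
    # as in Poisson regression: log(lambda_t) = beta[0] + beta[1] * x_t
    lam = math.exp(beta[0] + beta[1] * x)
    return poisson_logpmf(y, lam)

# Observed count 3 with covariate 1.0 and illustrative coefficients
ll = emission_loglik(3, 1.0, (0.5, 0.2))
```

In a full (semi-)HMM these per-observation emission log-likelihoods would be plugged into the forward recursion, one rate function per hidden state.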
42,728 | Why do we have to use action-value function in model-free reinforcement learning instead of just state-value function? | This is only true when using temporal difference learning alone, i.e. Q-learning. In that setting you are learning the optimal state-action-value function Q* and then taking actions that maximize Q*. If instead, you learned V*, you know the real value of the state that you are in if you followed an optimal policy but t... | Why do we have to use action-value function in model-free reinforcement learning instead of just sta | This is only true when using temporal difference learning alone, i.e. Q-learning. In that setting you are learning the optimal state-action-value function Q* and then taking actions that maximize Q*. | Why do we have to use action-value function in model-free reinforcement learning instead of just state-value function?
This is only true when using temporal difference learning alone, i.e. Q-learning. In that setting you are learning the optimal state-action-value function Q* and then taking actions that maximize Q*. I... | Why do we have to use action-value function in model-free reinforcement learning instead of just sta
This is only true when using temporal difference learning alone, i.e. Q-learning. In that setting you are learning the optimal state-action-value function Q* and then taking actions that maximize Q*. |
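The contrast drawn above can be made concrete: a tabular Q-learning update needs only the sampled transition, and acting greedily from Q* is a plain argmax over actions, whereas acting from V* alone would require a transition model to look ahead. A toy sketch (states and actions mine):

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Model-free update: uses only the sampled transition (s, a, r, s_next)
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def greedy_action(Q, s):
    # Acting from Q needs no transition model: just argmax over actions.
    # Acting from V alone would require knowing where each action leads.
    return max(Q[s], key=Q[s].get)

# Toy two-state problem: one observed transition is enough to update
Q = {0: {"left": 0.0, "right": 0.0}, 1: {"left": 0.0, "right": 0.0}}
q_learning_update(Q, 0, "right", 1.0, 1)
```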
42,729 | Minumum Sample Size for Permutation Test | You partially answered your own question. Consider the reason you're often performing a permutation test. It's usually in circumstances where you have little faith in any particular parametric distribution or for some other reason want a non-parametric solution. In that case, how does one estimate power? You could be d... | Minumum Sample Size for Permutation Test | You partially answered your own question. Consider the reason you're often performing a permutation test. It's usually in circumstances where you have little faith in any particular parametric distrib | Minumum Sample Size for Permutation Test
You partially answered your own question. Consider the reason you're often performing a permutation test. It's usually in circumstances where you have little faith in any particular parametric distribution or for some other reason want a non-parametric solution. In that case, ho... | Minumum Sample Size for Permutation Test
You partially answered your own question. Consider the reason you're often performing a permutation test. It's usually in circumstances where you have little faith in any particular parametric distrib |
42,730 | Minumum Sample Size for Permutation Test | I would wager permutation tests inflate the false negative rate in a small sample. Here's an extreme example to illustrate, but this applies with less extreme examples too:
r=1 with sample size 4
there are 4! = 24 permutations
Therefore: at least 1/24 ( = .042) permutations will have r=1 so p(r=1) >= 0.042. This is fa... | Minumum Sample Size for Permutation Test | I would wager permutation tests inflate the false negative rate in a small sample. Here's an extreme example to illustrate, but this applies with less extreme examples too:

r=1 with sample size 4
the | Minumum Sample Size for Permutation Test
I would wager permutation tests inflate the false negative rate in a small sample. Here's an extreme example to illustrate, but this applies with less extreme examples too:
r=1 with sample size 4
there are 4! = 24 permutations
Therefore: at least 1/24 ( = .042) permutations wi... | Minumum Sample Size for Permutation Test
I would wager permutation tests inflate the false negative rate in a small sample. Here's an extreme example to illustrate, but this applies with less extreme examples too:
r=1 with sample size 4
the |
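The arithmetic above can be checked by brute force: with a sample size of 4 there are 4! = 24 permutations, so an exact permutation p-value can never fall below 1/24 ≈ 0.042, no matter how strong the observed correlation. A sketch (toy data mine):

```python
from itertools import permutations
from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

x = [1, 2, 3, 4]
y = [2, 4, 6, 8]          # perfectly correlated: r = 1
r_obs = pearson_r(x, y)

# Exact permutation p-value: fraction of the 4! = 24 permutations of y
# whose correlation is at least as large as the observed one
rs = [pearson_r(x, p) for p in permutations(y)]
p_value = sum(r >= r_obs for r in rs) / len(rs)
```

Even with r = 1, the p-value is exactly 1/24 ≈ 0.042 — the floor set by the number of permutations.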
42,731 | Robust time-series regression for outlier detection | I took your 90 days of data (24 hourly readings per day) and analyzed it using AUTOBOX a piece of software that I have helped develop using a 28 day forecast horizon. The documentation for the approach can be found in the User Guide available from the AFS website. I will try and give you a general overview here. Th... | Robust time-series regression for outlier detection | I took your 90 days of data (24 hourly readings per day) and analyzed it using AUTOBOX a piece of software that I have helped develop using a 28 day forecast horizon. The documentation for the approac | Robust time-series regression for outlier detection
I took your 90 days of data (24 hourly readings per day) and analyzed it using AUTOBOX a piece of software that I have helped develop using a 28 day forecast horizon. The documentation for the approach can be found in the User Guide available from the AFS website. I w... | Robust time-series regression for outlier detection
I took your 90 days of data (24 hourly readings per day) and analyzed it using AUTOBOX a piece of software that I have helped develop using a 28 day forecast horizon. The documentation for the approac |
42,732 | Neural network language model - prediction for the word at the center or the right of context words | The task of finding missing words in a text sometimes referred to as text imputation, or sentence completion.
One paper exploring it with ANN: Solving Text Imputation Using Recurrent Neural Networks. Arathi Mani. CS224D report. 2016. http://cs224d.stanford.edu/reports/ManiArathi.pdf
In this paper, we have shown that t... | Neural network language model - prediction for the word at the center or the right of context words | The task of finding missing words in a text sometimes referred to as text imputation, or sentence completion.
One paper exploring it with ANN: Solving Text Imputation Using Recurrent Neural Networks. | Neural network language model - prediction for the word at the center or the right of context words
The task of finding missing words in a text sometimes referred to as text imputation, or sentence completion.
One paper exploring it with ANN: Solving Text Imputation Using Recurrent Neural Networks. Arathi Mani. CS224D ... | Neural network language model - prediction for the word at the center or the right of context words
The task of finding missing words in a text sometimes referred to as text imputation, or sentence completion.
One paper exploring it with ANN: Solving Text Imputation Using Recurrent Neural Networks. |
42,733 | Neural network language model - prediction for the word at the center or the right of context words | What you are describing is Tomas Mikolov's Word2vec model Word2vec. His implementation has 2 parts the Skip-gram model and the CBOW model. Paper here
CBOW, which is what you need, is trained to predict the target word t from the contextual words that surround it, c, i.e. the goal is to maximise P(t | c) over the traini... | Neural network language model - prediction for the word at the center or the right of context words | What you are describing is Tomas Mikolov's Word2vec model Word2vec. His implementation has 2 parts the Skip-gram model and the CBOW model. Paper here
CBOW, which is what you need, is trained to predic | Neural network language model - prediction for the word at the center or the right of context words
What you are describing is Tomas Mikolov's Word2vec model Word2vec. His implementation has 2 parts the Skip-gram model and the CBOW model. Paper here
CBOW, which is what you need, is trained to predict the target word t ... | Neural network language model - prediction for the word at the center or the right of context words
What you are describing is Tomas Mikolov's Word2vec model Word2vec. His implementation has 2 parts the Skip-gram model and the CBOW model. Paper here
CBOW, which is what you need, is trained to predic |
42,734 | How to approximate (log-)likelihood from model specification using particle filters | You can think of these different particle filters as different pieces of measurement equipment (like scales or rulers). When we're outside of pure geometry, it's difficult to know exactly how big an object is, and we might get slightly different answers if we measure it a few times with different equipment. Similarly... | How to approximate (log-)likelihood from model specification using particle filters | You can think of these different particle filters as different pieces of measurement equipment (like scales or rulers). When we're outside of pure geometry, it's difficult to know exactly how big an | How to approximate (log-)likelihood from model specification using particle filters
You can think of these different particle filters as different pieces of measurement equipment (like scales or rulers). When we're outside of pure geometry, it's difficult to know exactly how big an object is, and we might get slightly... | How to approximate (log-)likelihood from model specification using particle filters
You can think of these different particle filters as different pieces of measurement equipment (like scales or rulers). When we're outside of pure geometry, it's difficult to know exactly how big an |
42,735 | How to approximate (log-)likelihood from model specification using particle filters | Differing from your model, I can give you some ideas based on my experience. Let's say you have a state space model:
$y_t = ax_t + \alpha_t$
$x_{t+1} = bx_t + e_t, \quad e_t \sim N(0, 1)$
A regular assumption on a state space model is that $\alpha_t$ is also a random variable from Gaussian distribution, e.g., $N(0, 2)$... | How to approximate (log-)likelihood from model specification using particle filters | Differing from your model, I can give you some ideas based on my experience. Let's say you have a state space model:
$y_t = ax_t + \alpha_t$
$x_{t+1} = bx_t + e_t, \quad e_t \sim N(0, 1)$
A regular as | How to approximate (log-)likelihood from model specification using particle filters
Differing from your model, I can give you some ideas based on my experience. Let's say you have a state space model:
$y_t = ax_t + \alpha_t$
$x_{t+1} = bx_t + e_t, \quad e_t \sim N(0, 1)$
A regular assumption on a state space model is t... | How to approximate (log-)likelihood from model specification using particle filters
Differing from your model, I can give you some ideas based on my experience. Let's say you have a state space model:
$y_t = ax_t + \alpha_t$
$x_{t+1} = bx_t + e_t, \quad e_t \sim N(0, 1)$
A regular as |
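For a toy model of exactly this form, the standard bootstrap particle filter approximates the log-likelihood as sum over t of log((1/N) * sum_i w_t^i). A hedged Python sketch (the N(0, 1) prior on x_1, the variance-2 reading of N(0, 2), the data, and the particle count are my assumptions, not the answer's):

```python
import math, random

random.seed(0)

def normal_logpdf(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def bootstrap_pf_loglik(y, a, b, n_particles=2000):
    # Bootstrap particle filter for
    #   y_t = a*x_t + alpha_t,   alpha_t ~ N(0, 2)   (variance-2 reading)
    #   x_{t+1} = b*x_t + e_t,   e_t ~ N(0, 1)
    # Log-likelihood estimate: sum_t log( mean_i w_t^i ).
    xs = [random.gauss(0, 1) for _ in range(n_particles)]  # assumed prior on x_1
    loglik = 0.0
    for yt in y:
        ws = [math.exp(normal_logpdf(yt, a * x, 2.0)) for x in xs]
        loglik += math.log(sum(ws) / n_particles)
        xs = random.choices(xs, weights=ws, k=n_particles)   # resample
        xs = [b * x + random.gauss(0, 1) for x in xs]        # propagate
    return loglik

ll_hat = bootstrap_pf_loglik([0.3, -0.1, 0.5], a=1.0, b=0.5)
```

Because the weights are averaged before taking logs, this estimator of the likelihood (not the log-likelihood) is unbiased, which is what particle MCMC methods rely on.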
42,736 | Using empirical Bayesian estimation (Gamma-Poisson) to analyze high arrival counts (n ~= 5000) | As a first observation, your z-scores are not going to give you what you want. A large z-score tells you that the new arrival count is anomalously large, not that the arrival curve is accelerating.
Secondly, I would strongly advise you start with a simpler approach. There are a few possibilities for this 'simpler appro... | Using empirical Bayesian estimation (Gamma-Poisson) to analyze high arrival counts (n ~= 5000) | As a first observation, your z-scores are not going to give you what you want. A large z-score tells you that the new arrival count is anomalously large, not that the arrival curve is accelerating.
Se | Using empirical Bayesian estimation (Gamma-Poisson) to analyze high arrival counts (n ~= 5000)
As a first observation, your z-scores are not going to give you what you want. A large z-score tells you that the new arrival count is anomalously large, not that the arrival curve is accelerating.
Secondly, I would strongly ... | Using empirical Bayesian estimation (Gamma-Poisson) to analyze high arrival counts (n ~= 5000)
As a first observation, your z-scores are not going to give you what you want. A large z-score tells you that the new arrival count is anomalously large, not that the arrival curve is accelerating.
Se |
42,737 | Expected root of quadratic random polynomial | Your $Z_1$ and $Z_2$ are not well defined until you have made a choice of which complex root to take. That choice could affect their distributions. (It actually does not, by virtue of the symmetries of $A$, $B$, and $C$ around $0$.)
Regardless, since $Z_1+Z_2=-B/A$ is well-defined, suppose you have made such a choic... | Expected root of quadratic random polynomial | Your $Z_1$ and $Z_2$ are not well defined until you have made a choice of which complex root to take. That choice could affect their distributions. (It actually does not, by virtue of the symmetrie | Expected root of quadratic random polynomial
Your $Z_1$ and $Z_2$ are not well defined until you have made a choice of which complex root to take. That choice could affect their distributions. (It actually does not, by virtue of the symmetries of $A$, $B$, and $C$ around $0$.)
Regardless, since $Z_1+Z_2=-B/A$ is wel... | Expected root of quadratic random polynomial
Your $Z_1$ and $Z_2$ are not well defined until you have made a choice of which complex root to take. That choice could affect their distributions. (It actually does not, by virtue of the symmetrie |
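The step that $Z_1 + Z_2 = -B/A$ is well-defined is Vieta's formula, which holds however the two complex roots are labelled; a quick numerical check (example coefficients mine):

```python
import cmath

def roots(a, b, c):
    # Both roots of a*z^2 + b*z + c = 0 (a != 0), real or complex
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

a, b, c = 1.5, -2.0, 3.0          # discriminant 4 - 18 < 0: complex roots
z1, z2 = roots(a, b, c)
sum_roots = z1 + z2               # Vieta: equals -b/a whichever root is "z1"
prod_roots = z1 * z2              # Vieta: equals c/a
```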
42,738 | What are periodic version of splines? | The Venables Ripley book discusses periodic splines. Basically, by specifying (correctly) the periodicity, the data are aggregated into replications over a period and splines are fit to interpolate the trend. For instance, using the AirPassengers dataset from R to model flight trends, I might use a categorical fixed ef... | What are periodic version of splines? | The Venables Ripley book discusses periodic splines. Basically, by specifying (correctly) the periodicity, the data are aggregated into replications over a period and splines are fit to interpolate th | What are periodic version of splines?
The Venables Ripley book discusses periodic splines. Basically, by specifying (correctly) the periodicity, the data are aggregated into replications over a period and splines are fit to interpolate the trend. For instance, using the AirPassengers dataset from R to model flight tren... | What are periodic version of splines?
The Venables Ripley book discusses periodic splines. Basically, by specifying (correctly) the periodicity, the data are aggregated into replications over a period and splines are fit to interpolate th |
42,739 | Fractional output dimensions of "sliding-windows" (convolutions, pooling etc) in neural networks | The fraction part comes from the stride operation. Without stride, the output size should be output_no_stride = input + 2*pad - filter + 1 = 224. With stride, the conventional formula to use is output_with_stride = floor((input + 2*pad - filter) / stride) + 1 = 112.
In many programming languages, the default behavior o... | Fractional output dimensions of "sliding-windows" (convolutions, pooling etc) in neural networks | The fraction part comes from the stride operation. Without stride, the output size should be output_no_stride = input + 2*pad - filter + 1 = 224. With stride, the conventional formula to use is output | Fractional output dimensions of "sliding-windows" (convolutions, pooling etc) in neural networks
The fraction part comes from the stride operation. Without stride, the output size should be output_no_stride = input + 2*pad - filter + 1 = 224. With stride, the conventional formula to use is output_with_stride = floor((i... | Fractional output dimensions of "sliding-windows" (convolutions, pooling etc) in neural networks
The fraction part comes from the stride operation. Without stride, the output size should be output_no_stride = input + 2*pad - filter + 1 = 224. With stride, the conventional formula to use is output |
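The two formulas above can be wrapped in a helper and checked on concrete numbers; a 7x7 filter with pad 3 on a 224-wide input (my example settings, chosen to be consistent with the 224 and 112 figures in the answer) gives 224 without stride and 112 with stride 2:

```python
import math

def conv_output_size(input_size, filter_size, pad, stride):
    # Conventional formula: floor((input + 2*pad - filter) / stride) + 1;
    # with stride = 1 this reduces to input + 2*pad - filter + 1.
    return math.floor((input_size + 2 * pad - filter_size) / stride) + 1

no_stride = conv_output_size(224, 7, 3, 1)    # 224 + 6 - 7 + 1 = 224
with_stride = conv_output_size(224, 7, 3, 2)  # floor(223 / 2) + 1 = 112
```

The floor is where the fractional sizes come from: 223/2 = 111.5, and frameworks must round it one way or the other.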
42,740 | Convergence Criteria for Stochastic Gradient Descent | I would suggest having some held-out data that forms a validation dataset. You can compute your loss function on the validation dataset periodically (it would probably be too expensive after each iteration, so after each epoch seems to make sense) and stop training once the validation loss has stabilized.
If you're in ... | Convergence Criteria for Stochastic Gradient Descent | I would suggest having some held-out data that forms a validation dataset. You can compute your loss function on the validation dataset periodically (it would probably be too expensive after each iter | Convergence Criteria for Stochastic Gradient Descent
I would suggest having some held-out data that forms a validation dataset. You can compute your loss function on the validation dataset periodically (it would probably be too expensive after each iteration, so after each epoch seems to make sense) and stop training o... | Convergence Criteria for Stochastic Gradient Descent
I would suggest having some held-out data that forms a validation dataset. You can compute your loss function on the validation dataset periodically (it would probably be too expensive after each iter |
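The validation-loss stopping rule described above can be sketched as a small training loop; the patience scheme and names are illustrative, not from the answer:

```python
def train_with_early_stopping(run_epoch, validation_loss, max_epochs=100, patience=5):
    # run_epoch() performs one pass of SGD; validation_loss() evaluates the
    # held-out set. Stop once the validation loss hasn't improved for
    # `patience` consecutive epochs.
    best, since_best = float("inf"), 0
    for epoch in range(max_epochs):
        run_epoch()
        loss = validation_loss()
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best

# Toy check: a validation loss that plateaus after epoch 10
losses = iter([1.0 / (min(t, 10) + 1) for t in range(100)])
best = train_with_early_stopping(lambda: None, lambda: next(losses), patience=3)
```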
42,741 | Sample size needed for Gaussian process classification | Classification can need more points than regression, for a few reasons:
1) If there are only 2 classes, the response variable contains much less information than a continuous variable, which could take many values.
2) With a small number of points, it's especially easy to get complete separation, which makes the maximu... | Sample size needed for Gaussian process classification | Classification can need more points than regression, for a few reasons:
1) If there are only 2 classes, the response variable contains much less information than a continuous variable, which could tak | Sample size needed for Gaussian process classification
Classification can need more points than regression, for a few reasons:
1) If there are only 2 classes, the response variable contains much less information than a continuous variable, which could take many values.
2) With a small number of points, it's especially ... | Sample size needed for Gaussian process classification
Classification can need more points than regression, for a few reasons:
1) If there are only 2 classes, the response variable contains much less information than a continuous variable, which could tak |
42,742 | Sample size needed for Gaussian process classification | that rule of thumb is wrong. GP is used in high dimensional settings with d >> n without any problems (genomics, mri) lets say 500 000 voxels in one image used to classify 100 subjects into 2 classes. I don't know if there is any theoretical bound, actual number would depend a lot on the nature of the data at hand, so ... | Sample size needed for Gaussian process classification | that rule of thumb is wrong. GP is used in high dimensional settings with d >> n without any problems (genomics, mri) lets say 500 000 voxels in one image used to classify 100 subjects into 2 classes. | Sample size needed for Gaussian process classification
that rule of thumb is wrong. GP is used in high dimensional settings with d >> n without any problems (genomics, mri) lets say 500 000 voxels in one image used to classify 100 subjects into 2 classes. I don't know if there is any theoretical bound, actual number wo... | Sample size needed for Gaussian process classification
that rule of thumb is wrong. GP is used in high dimensional settings with d >> n without any problems (genomics, mri) lets say 500 000 voxels in one image used to classify 100 subjects into 2 classes. |
42,743 | How are calculations done for REML? | Here is a simple example with calculations that shows the idea. We work in the linear model $Y = X\beta + e, e\sim N(0, \Sigma(\theta))$, where $Y$ is the $n \times 1$ response vector, $X$ the $n\times p$ design matrix, and $\theta$ parametrizes the covariance matrix. Suppose interest lies in estimating $\theta$.
Assum... | How are calculations done for REML? | Here is a simple example with calculations that shows the idea. We work in the linear model $Y = X\beta + e, e\sim N(0, \Sigma(\theta))$, where $Y$ is the $N \times 1$ response vector, $X$ the $n\time | How are calculations done for REML?
Here is a simple example with calculations that shows the idea. We work in the linear model $Y = X\beta + e, e\sim N(0, \Sigma(\theta))$, where $Y$ is the $N \times 1$ response vector, $X$ the $n\times p$ design matrix, and $\theta$ parametrizes the covariance matrix. Suppose interes... | How are calculations done for REML?
Here is a simple example with calculations that shows the idea. We work in the linear model $Y = X\beta + e, e\sim N(0, \Sigma(\theta))$, where $Y$ is the $N \times 1$ response vector, $X$ the $n\time |
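The REML idea can be checked numerically in its simplest special case, $Y = \mu 1 + e$, $e \sim N(0, \sigma^2 I)$: transform by orthonormal error contrasts $A$ with $A 1 = 0$ and maximize the likelihood of $AY$, which is free of $\mu$. Using Helmert contrasts as $A$ (my choice), REML reproduces the familiar unbiased $n-1$ divisor:

```python
import math
from statistics import variance

def helmert_contrasts(n):
    # Orthonormal rows a_k with sum(a_k) = 0: a valid error-contrast matrix A
    # (A @ ones = 0, A A^T = I) for the intercept-only model Y = mu*1 + e.
    rows = []
    for k in range(1, n):
        row = [1.0] * k + [-float(k)] + [0.0] * (n - k - 1)
        norm = math.sqrt(k * (k + 1))
        rows.append([v / norm for v in row])
    return rows

y = [2.1, 3.4, 1.9, 4.0, 2.6]
A = helmert_contrasts(len(y))
w = [sum(a_i * y_i for a_i, y_i in zip(row, y)) for row in A]  # w = A y

# REML works with w ~ N(0, sigma^2 I_{n-1}); its MLE of sigma^2 is
# ||w||^2 / (n-1), which equals the unbiased sample variance of y
sigma2_reml = sum(wi ** 2 for wi in w) / (len(y) - 1)
```

By contrast, plain ML on the original data divides by $n$, which is exactly the downward bias REML removes.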
42,744 | Deviance vs Gini coefficient in GLM | As also mentioned in the link Scortchi supplies the Gini coefficient (or the proportional c-statistic or AUC) only contains information how well the model ranks the outcomes and no information about the calibration.
The deviance in a binary glm model is twice the negative value of logarithmic scoring rule as sho... | Deviance vs Gini coefficient in GLM | As also mentioned in the link Scortchi supplies the Gini coefficient (or the proportional c-statistic or AUC) only contains information how well the model ranks the outcomes and no information about t | Deviance vs Gini coefficient in GLM
As also mentioned in the link Scortchi supplies the Gini coefficient (or the proportional c-statistic or AUC) only contains information how well the model ranks the outcomes and no information about the calibration.
The deviance in a binary glm model is twice the negative valu... | Deviance vs Gini coefficient in GLM
As also mentioned in the link Scortchi supplies the Gini coefficient (or the proportional c-statistic or AUC) only contains information how well the model ranks the outcomes and no information about t |
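Both points can be seen on toy numbers: shrinking all predicted probabilities by a monotone map leaves the ranking — and hence the AUC/Gini — untouched, while the deviance (twice the negative log score) gets worse. A sketch (toy outcomes and probabilities mine):

```python
import math

def deviance(y, p):
    # Binomial deviance: -2 * sum of log scores
    return -2 * sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                    for yi, pi in zip(y, p))

def auc(y, p):
    # Probability a random positive outranks a random negative (ties = 0.5)
    pos = [pi for yi, pi in zip(y, p) if yi == 1]
    neg = [pi for yi, pi in zip(y, p) if yi == 0]
    wins = sum((a > b) + 0.5 * (a == b) for a in pos for b in neg)
    return wins / (len(pos) * len(neg))

y = [0, 0, 1, 0, 1, 1]
p_calibrated = [0.1, 0.2, 0.6, 0.3, 0.8, 0.9]
p_shrunk = [pi / 2 for pi in p_calibrated]  # same ranking, worse calibration

same_ranking = auc(y, p_calibrated) == auc(y, p_shrunk)
```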
42,745 | Why scale cost functions by 1/n in a neural network? | In terms of mini-batch learning, $n$ should be the size of the batch instead of the total amount of training data (which in your case should be infinite).
Gradients are scaled by $1/n$ because we are taking the average of the batch, so the same learning rate can be used regardless of the size of the batch.
Edit
I fo... | Why scale cost functions by 1/n in a neural network? | In terms of mini-batch learning, $n$ should be the size of the batch instead of the total amount of training data (which in your case should be infinite).
Gradients are scaled by $1/n$ because we ar | Why scale cost functions by 1/n in a neural network?
In terms of mini-batch learning, $n$ should be the size of the batch instead of the total amount of training data (which in your case should be infinite).
Gradients are scaled by $1/n$ because we are taking the average of the batch, so the same learning rate can be... | Why scale cost functions by 1/n in a neural network?
In terms of mini-batch learning, $n$ should be the size of the batch instead of the total amount of training data (which in your case should be infinite).
Gradients are scaled by $1/n$ because we ar |
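The averaging argument can be checked directly: with a $1/n$-scaled (averaged) gradient, duplicating the batch leaves the gradient unchanged, so one learning rate works for any batch size; a summed gradient would double instead. A sketch with a one-parameter least-squares loss (toy data mine):

```python
def avg_gradient(w, batch):
    # d/dw of (1/n) * sum_i (w*x_i - y_i)^2  =  (1/n) * sum_i 2*x_i*(w*x_i - y_i)
    n = len(batch)
    return sum(2 * x * (w * x - y) for x, y in batch) / n

batch = [(1.0, 2.0), (2.0, 3.0)]
g_small = avg_gradient(0.5, batch)       # averaged over 2 points
g_large = avg_gradient(0.5, batch * 2)   # same points duplicated: n = 4
```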
42,746 | Cluster Boostrap with Unequally Sized Clusters | This is explained quite nicely in Sherman and leCessie's paper, "A comparison between bootstrap methods and generalized estimating equations for correlated outcomes in generlized linear models." On page 905, they note:
"If as often may be the case, there are blocks of different sizes, then the algorithm can be modifie... | Cluster Boostrap with Unequally Sized Clusters | This is explained quite nicely in Sherman and leCessie's paper, "A comparison between bootstrap methods and generalized estimating equations for correlated outcomes in generlized linear models." On p | Cluster Boostrap with Unequally Sized Clusters
This is explained quite nicely in Sherman and leCessie's paper, "A comparison between bootstrap methods and generalized estimating equations for correlated outcomes in generlized linear models." On page 905, they note:
"If as often may be the case, there are blocks of dif... | Cluster Boostrap with Unequally Sized Clusters
This is explained quite nicely in Sherman and leCessie's paper, "A comparison between bootstrap methods and generalized estimating equations for correlated outcomes in generlized linear models." On p |
42,747 | Cluster Boostrap with Unequally Sized Clusters | I wrote something in R for my own use, based on the quote from Sherman and Cessie (1997) in StatsStudent's answer.
It implements bootstrap replicates on clustered data with clusters of different sizes.
It makes sure that clusters sampled more than once (due to replacement) are treated as distinct clusters within bootst... | Cluster Boostrap with Unequally Sized Clusters | I wrote something in R for my own use, based on the quote from Sherman and Cessie (1997) in StatsStudent's answer.
It implements bootstrap replicates on clustered data with clusters of different sizes | Cluster Boostrap with Unequally Sized Clusters
I wrote something in R for my own use, based on the quote from Sherman and Cessie (1997) in StatsStudent's answer.
It implements bootstrap replicates on clustered data with clusters of different sizes.
It makes sure that clusters sampled more than once (due to replacement)... | Cluster Boostrap with Unequally Sized Clusters
I wrote something in R for my own use, based on the quote from Sherman and Cessie (1997) in StatsStudent's answer.
It implements bootstrap replicates on clustered data with clusters of different sizes |
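Since the R code itself is not reproduced here, a language-neutral sketch of the quoted scheme — resample whole clusters with replacement, and give re-drawn clusters fresh labels so they are treated as distinct clusters downstream — might look like this (names and data structure mine):

```python
import random

def cluster_bootstrap(data, n_reps=1000, seed=0):
    # data: dict cluster_id -> list of observations (clusters may differ in size).
    # Each replicate resamples whole clusters with replacement; a cluster drawn
    # twice gets a fresh integer id so it counts as two distinct clusters.
    rng = random.Random(seed)
    ids = list(data)
    for _ in range(n_reps):
        drawn = rng.choices(ids, k=len(ids))
        yield {new_id: list(data[cid]) for new_id, cid in enumerate(drawn)}

data = {"a": [1, 2], "b": [3], "c": [4, 5, 6]}
rep = next(cluster_bootstrap(data))
```

Each replicate keeps the number of clusters fixed while letting the total number of observations vary, which is the behavior Sherman and le Cessie describe for unequal cluster sizes.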
42,748 | $\min D_\textrm{KL}(p(x_1,\dots,x_n) \mid\mid q_1(x_1)\cdots q_n(x_n))$ gives the marginals of $p(x_1,\dots,x_n)$? | Using logarithmic identities, we can rewrite the KL divergence as
$$\sum_{x_1, ..., x_n} p(x_1, ..., x_n) \log p(x_1, ..., x_n) - \sum_{i = 1}^n \sum_{x_1, ..., x_n} p(x_1, ..., x_n) \log q_i(x_i).$$
Note that only the second term depends on the univariate distributions over which we optimize, so we can focus on it and... | $\min D_\textrm{KL}(p(x_1,\dots,x_n) \mid\mid q_1(x_1)\cdots q_n(x_n))$ gives the marginals of $p(x_ | Using logarithmic identities, we can rewrite the KL divergence as
$$\sum_{x_1, ..., x_n} p(x_1, ..., x_n) \log p(x_1, ..., x_n) - \sum_{i = 1}^n \sum_{x_1, ..., x_n} p(x_1, ..., x_n) \log q_i(x_i).$$
| $\min D_\textrm{KL}(p(x_1,\dots,x_n) \mid\mid q_1(x_1)\cdots q_n(x_n))$ gives the marginals of $p(x_1,\dots,x_n)$?
Using logarithmic identities, we can rewrite the KL divergence as
$$\sum_{x_1, ..., x_n} p(x_1, ..., x_n) \log p(x_1, ..., x_n) - \sum_{i = 1}^n \sum_{x_1, ..., x_n} p(x_1, ..., x_n) \log q_i(x_i).$$
Note ... | $\min D_\textrm{KL}(p(x_1,\dots,x_n) \mid\mid q_1(x_1)\cdots q_n(x_n))$ gives the marginals of $p(x_
Using logarithmic identities, we can rewrite the KL divergence as
$$\sum_{x_1, ..., x_n} p(x_1, ..., x_n) \log p(x_1, ..., x_n) - \sum_{i = 1}^n \sum_{x_1, ..., x_n} p(x_1, ..., x_n) \log q_i(x_i).$$
|
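The claim can also be checked numerically on a small example: among all product distributions $q_1(x_1) q_2(x_2)$, the KL divergence from a 2x2 joint $p$ is minimized at the product of $p$'s marginals. A brute-force sketch (toy joint mine):

```python
import math

# Joint p over {0,1} x {0,1} with dependence (p(1,1) != P(X1=1) * P(X2=1))
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def kl_to_product(q1, q2):
    # q1, q2 = P(X1 = 1), P(X2 = 1) under the candidate product distribution
    kl = 0.0
    for (x1, x2), pxy in p.items():
        q = (q1 if x1 else 1 - q1) * (q2 if x2 else 1 - q2)
        kl += pxy * math.log(pxy / q)
    return kl

# Marginals of p
m1 = p[(1, 0)] + p[(1, 1)]   # P(X1 = 1) = 0.5
m2 = p[(0, 1)] + p[(1, 1)]   # P(X2 = 1) = 0.4

# Grid search: no product distribution beats the product of the marginals
grid = [i / 100 for i in range(1, 100)]
best = min(kl_to_product(a, b) for a in grid for b in grid)
```

The minimum value is the mutual information of $p$, which is another way to read the result.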
42,749 | Expectation of log likelihood ratio | The integral in question is in fact the Kullback-Leibler divergence between $F(\cdot,θ_0)$ and $F(\cdot,\hat{θ})$. You cannot generally say that it converges to anything (especially considering the fact that your parametric assumption may be wrong). However, for certain distribution families it has good estimates.
You m... | Expectation of log likelihood ratio | The integral in question is in fact the Kullback-Leibler divergance between $F(\cdot,θ_0)$ and $F(\cdot,\hat{θ})$. You cannot generally say that it converges to anything (especially considering the fa | Expectation of log likelihood ratio
The integral in question is in fact the Kullback-Leibler divergance between $F(\cdot,θ_0)$ and $F(\cdot,\hat{θ})$. You cannot generally say that it converges to anything (especially considering the fact that you parametric assumption may be wrong). However, for certain distribution f... | Expectation of log likelihood ratio
The integral in question is in fact the Kullback-Leibler divergance between $F(\cdot,θ_0)$ and $F(\cdot,\hat{θ})$. You cannot generally say that it converges to anything (especially considering the fa |
42,750 | Expectation of log likelihood ratio | You cannot say, in general, that the finite sample expectation of the log likelihood ratio will be 1, even though it asymptotically converges in probability to 1. "Around 1" is a reasonable guess, but it's possible to make the finite sample bias arbitrarily large with any number of contrived distributions.
The actual d... | Expectation of log likelihood ratio | You cannot say, in general, that the finite sample expectation of the log likelihood ratio will be 1, even though it asymptotically converges in probability to 1. "Around 1" is a reasonable guess, but | Expectation of log likelihood ratio
You cannot say, in general, that the finite sample expectation of the log likelihood ratio will be 1, even though it asymptotically converges in probability to 1. "Around 1" is a reasonable guess, but it's possible to make the finite sample bias arbitrarily large with any number of c... | Expectation of log likelihood ratio
You cannot say, in general, that the finite sample expectation of the log likelihood ratio will be 1, even though it asymptotically converges in probability to 1. "Around 1" is a reasonable guess, but |
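One family where the finite-sample expectation can be pinned down exactly: for a Normal(mu, 1) model, 2*(loglik at the MLE xbar minus loglik at mu0) equals n*(xbar - mu0)^2, which is exactly chi-squared with 1 degree of freedom under the null and so has mean exactly 1 at every sample size; the finite-sample bias the answer warns about arises in less friendly families. A Monte Carlo sketch (simulation settings mine):

```python
import random

random.seed(1)

def lr_stat(sample, mu0=0.0):
    # Normal(mu, 1): 2*(loglik at MLE xbar - loglik at mu0) = n*(xbar - mu0)^2
    n = len(sample)
    xbar = sum(sample) / n
    return n * (xbar - mu0) ** 2

draws = [lr_stat([random.gauss(0, 1) for _ in range(10)]) for _ in range(20000)]
mc_mean = sum(draws) / len(draws)
```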
42,751 | Can a decision tree automatically detect the effect on the dependent variable from the product/quotient of two independent variables? | Yes and no. By having a sufficiently deep tree (at least two splits deep) and splitting on both $x_1$ and $x_2$, tree based model like xgboost (or LightGBM or catboost) can eventually approximate (given enough data) any relationship between $x_1\times x_2$ and your prediction target of interest. Of course, if you know ... | Can a decision tree automatically detect the effect on the dependent variable from the product/quoti | Yes and no. By having a sufficiently deep tree (at least two splits deep) and splitting on both $x_1$ and $x_2$, tree based model like xgboost (or LightGBM or catboost) can eventually approximate (giv | Can a decision tree automatically detect the effect on the dependent variable from the product/quotient of two independent variables?
Yes and no. By having a sufficiently deep tree (at least two splits deep) and splitting on both $x_1$ and $x_2$, tree based model like xgboost (or LightGBM or catboost) can eventually ap... | Can a decision tree automatically detect the effect on the dependent variable from the product/quoti
Yes and no. By having a sufficiently deep tree (at least two splits deep) and splitting on both $x_1$ and $x_2$, tree based model like xgboost (or LightGBM or catboost) can eventually approximate (giv |
42,752 | Can a decision tree automatically detect the effect on the dependent variable from the product/quotient of two independent variables? | The answer to your question depends on what class of split rules you allow in the fitting of a decision tree. If the only class of allowable splits is on a single variable, you will never be able to capture the interaction behavior described in the post. In fact what you will see that allows you to diagnose something l...
42,753 | What is the distribution for the time before K successes happen in N trials? | Suppose that $X_1,X_2,\dotsc,X_N$ are iid with the unit exponential distribution with density $f(x) = e^{-x}, x\ge 0$. (You can adapt the results to some other rate). But, each $X_i$ (the waiting time before person $i$ makes his phone call) will only be realized with some probability $p$, and with probability $1-p$ the...
42,754 | What is the distribution for the time before K successes happen in N trials? | If N is fixed and K is random, then the number of successes K is such that $K \sim Bin(N,p)$. You are not guaranteed to get K successes for any fixed K if N is also fixed.
Alternatively, if N is random and K is fixed, and you are wondering about the distribution of the number of trials, N, until K successes are achiev...
42,755 | What is the distribution for the time before K successes happen in N trials? | The time is the $K$'th order statistic of $N'$ iid exponential distributions where $N'$ is binomial distributed with parameters $N, p$. Each order statistic is distributed as in this solution described below, which you mix over the outcomes of $N'$.
Let $\lambda$ be the rate of $T$. Then the first order statistic of ...
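The mixture described in this answer can be checked with a small Monte Carlo sketch (Python; the choices `N = 10`, `K = 3`, `p = 0.6` and the unit exponential rate are illustrative assumptions, not values from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, p = 10, 3, 0.6  # illustrative: N potential callers, we want the K-th call

def kth_call_time():
    """Time of the K-th call: the K-th order statistic of N' iid Exp(1) times,
    where N' ~ Binomial(N, p) is the number of people who actually call."""
    n_callers = rng.binomial(N, p)
    if n_callers < K:
        return None  # fewer than K calls ever happen
    times = rng.exponential(1.0, size=n_callers)
    return np.sort(times)[K - 1]

samples = [t for t in (kth_call_time() for _ in range(100_000)) if t is not None]
print(f"mean time of call {K}: {np.mean(samples):.3f}")
```

Note that the event "fewer than K successes ever happen" has positive probability, so the resulting distribution is defective unless you condition on $N' \ge K$, as the filter above does.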
42,756 | chi-squared goodness-of-fit: effect size and power | $\lambda =\omega^2N$, see Cohen, Jacob (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.), page 549, formula 12.7.1.
Hence the effect size you mention is Cohen's omega ($\omega$, sometimes written "w").
$\omega=\sqrt{\frac{\chi^2}{N}}$. The $p_{0i}$ and $p_{1i}$ in the formula you give in your ques...
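As a sketch of how these pieces fit together, power follows from the noncentral chi-squared distribution with $\lambda = \omega^2 N$ (Python with scipy; the cell probabilities, $N=200$, and $\alpha=.05$ are made-up illustration values):

```python
import numpy as np
from scipy import stats

# Hypothesized (null) and alternative cell probabilities for a 4-cell GoF test
p0 = np.array([0.25, 0.25, 0.25, 0.25])
p1 = np.array([0.40, 0.20, 0.20, 0.20])

w = np.sqrt(np.sum((p1 - p0) ** 2 / p0))  # Cohen's omega (effect size)
N = 200                                   # planned sample size
lam = w ** 2 * N                          # noncentrality parameter, lambda = w^2 * N
df = len(p0) - 1

crit = stats.chi2.ppf(0.95, df)           # critical value at alpha = .05
power = stats.ncx2.sf(crit, df, lam)      # P(reject | alternative is true)
print(f"w = {w:.3f}, lambda = {lam:.1f}, power = {power:.3f}")
```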
42,757 | Dispersion of points on 2D or 3D | In situations like this, people often use the variance-covariance matrix. Along the main diagonal, the variance for each dimension is listed. Each $i, j$th off diagonal element (where $i\ne j$) lists the covariance of variables $i$ and $j$. In this way, every aspect of the dispersion is listed separately.
On the o...
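A minimal numpy illustration (the 3-D covariance values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# 500 points in 3D with correlated first and second coordinates
pts = rng.multivariate_normal(mean=[0, 0, 0],
                              cov=[[2.0, 1.2, 0.0],
                                   [1.2, 1.0, 0.0],
                                   [0.0, 0.0, 0.5]],
                              size=500)

S = np.cov(pts, rowvar=False)  # 3x3 sample variance-covariance matrix
print(np.round(S, 2))
# the trace (total variance) and determinant (generalized variance) are
# common one-number summaries of the overall dispersion
print(f"trace = {S.trace():.2f}, det = {np.linalg.det(S):.3f}")
```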
42,758 | Identification of Bayesian models | "An unidentified Bayesian model is one in which the prior and posterior are exactly the same, and nothing is learned from the data". While this is not a main concern, the mentioned point stays valid in the Bayesian setting: if for some $\Theta_1 \ne \Theta_2$, $p(x|\Theta_1)=p(x|\Theta_2)$, then the posterior distribution...
42,759 | Whether to Use Continuity Correction When Conducting a Test of Equality of 2 Proportions | So, Yates showed that the use of Pearson’s chi-squared has the implication of p–values which underestimate the true p–values based on the binomial distribution, but that you already know. Actually, statisticians tend to disagree about whether to use it: some statisticians argue that expected frequency lower than five s...
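To see the effect in practice, scipy exposes the correction as a flag; `chi2_contingency` applies Yates' correction to 2×2 tables when `correction=True` (its default). The 2×2 counts below are made up:

```python
import numpy as np
from scipy import stats

# 2x2 table of successes/failures in two groups (small, invented counts)
table = np.array([[8, 2],
                  [4, 6]])

chi2_c, p_corrected, _, _ = stats.chi2_contingency(table, correction=True)
chi2_u, p_uncorrected, _, _ = stats.chi2_contingency(table, correction=False)
_, p_exact = stats.fisher_exact(table)

print(f"Yates-corrected p = {p_corrected:.4f}")
print(f"uncorrected     p = {p_uncorrected:.4f}")
print(f"Fisher exact    p = {p_exact:.4f}")
# the corrected p-value is always larger (more conservative) than the uncorrected one
```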
42,760 | Removing interaction term from repeated measures two-way ANOVA in R: Anova() function in car package | While I'm no expert in repeated measures ANOVA, I have some familiarity with the Anova() function in car.
Type I or sequential Anova estimates a sequence of models in an effectively arbitrary order, each time permanently removing the previously tested regressor from the subsequent step. Many of its steps are not neces...
42,761 | The meaning of tensors in the neural network community [duplicate] | Tensors in the neural network community = vector (1D-tensor), matrix/array (2D-tensor), or multi-dimensional array (nD-tensor, with $n > 2$).
Examples:
Related: Why the sudden fascination with tensors?
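In numpy terms the distinction is just the number of axes (shapes below chosen arbitrarily):

```python
import numpy as np

vector = np.zeros(8)                # 1D-tensor: shape (8,)
matrix = np.zeros((8, 8))           # 2D-tensor: shape (8, 8)
batch  = np.zeros((32, 3, 64, 64))  # 4D-tensor: e.g. a batch of 32 RGB images

for t in (vector, matrix, batch):
    print(f"ndim = {t.ndim}, shape = {t.shape}")
```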
42,762 | Confidence-interval / p-value duality vs. frequentist interpretation of CIs | You use different null hypotheses in each situation.
When performing a hypothesis test, you set the null hypothesis to some value you are attempting to test the implausibility of. Let's consider the following model:
$$ Y = \beta * X + \epsilon $$
You will collect some data and with it, compute an estimate of $\beta$, w...
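The duality itself can be verified numerically (Python/scipy sketch; the simulated slope 0.4 and $n = 50$ are arbitrary). `linregress` reports the p-value for $H_0\colon \beta = 0$, and the 95% CI built from the same standard error excludes 0 exactly when that p-value falls below .05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 50
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)  # true slope 0.4 (arbitrary)

res = stats.linregress(x, y)
# 95% CI for the slope from the estimate and its standard error
t_crit = stats.t.ppf(0.975, df=n - 2)
ci = (res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr)

print(f"slope = {res.slope:.3f}, "
      f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), p = {res.pvalue:.4f}")
```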
42,763 | Relation between covariance and joint distribution | I would ask the opposite question: what are the implications of zero covariance for any surviving dependence? The two variables can certainly be stochastically dependent even though their covariance is zero, but what kind of dependencies, and what kind of bivariate joint distributions, are excluded if covariance is zero...
42,764 | Can I get a Cholesky decomposition from the inverse of a matrix? | You can avoid inverting the matrix by generating draws by means of the eigendecomposition method. According to this method, the draws are generated by doing this product:
$$
(V D)^\top X^\top \,,
$$
where $V$ is the eigenvectors of the matrix, $D$ is a diagonal
matrix containing the square roots of the eigenvalues and...
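A numpy sketch of this recipe (the target matrix and sample size are arbitrary; I write the product as `Z @ (V D).T` with `Z` holding iid standard-normal rows, which matches the formula up to transposition conventions):

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[4.0, 1.5],
                  [1.5, 2.0]])

# eigendecomposition Sigma = V diag(w) V^T  (no inverse or Cholesky needed)
w, V = np.linalg.eigh(Sigma)
A = V * np.sqrt(w)  # columns of V scaled by sqrt eigenvalues, i.e. V D

Z = rng.standard_normal((100_000, 2))  # iid standard-normal rows
Y = Z @ A.T                            # rows of Y ~ N(0, Sigma), since A A^T = Sigma

print(np.round(np.cov(Y, rowvar=False), 2))
```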
42,765 | Can I calculate this Bayesian line without needing to simulate every point? | As Kahneman & Tversky explain, and as you have accurately stated, the plot converts a probability $p$ (on the horizontal axis) to odds, multiplies that by an odds ratio $\alpha = 5.44$, then converts it back to a probability $q$ plotted on the vertical axis. It shows how Bayes' Rule works for a fixed odds ratio applie...
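Since the construction is deterministic, the whole line follows from one closed-form function and no simulation is needed (Python; $\alpha = 5.44$ is the value quoted above):

```python
import numpy as np

alpha = 5.44  # fixed odds ratio from the Kahneman & Tversky example

def posterior(p, alpha=alpha):
    """Closed form: convert p to odds, multiply by alpha, convert back."""
    odds = p / (1 - p)
    post_odds = alpha * odds
    return post_odds / (1 + post_odds)  # equivalently alpha*p / (alpha*p + 1 - p)

ps = np.linspace(0.01, 0.99, 99)
qs = posterior(ps)  # the whole curve in one vectorized call
print(f"q(0.5) = {posterior(0.5):.3f}")
```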
42,766 | Why do I end up with a highly correlated matrix when I multiply two strictly positive random matrices? | It helps to think about the underlying meaning of the scores that result from each approach. Denote the binary user-item scores as $X_{hi}$ for user $h=1,...,H$ on item $i=1,...,I$ and the item-attribute scores as $T_{ia}$ for item $i=1,...,I$ and attribute $a=1,...,A$.
First consider using the dot product (without no...
42,767 | Magic Urn Problem | What am I missing?
How can it be that even though the equations that I used to solve the problem (and remember I didn't know the answer was going to be 3R/3B at that point) yield the correct answer even with the inclusion of an impossible scenario (2 pairs of red)?
$$0.5 = p\,(2\,R\,|after\,Red\,pair)+p\,(2\,B\,|aft...
42,768 | Magic Urn Problem | You miss that for n<4 this:
$$0.5 = p\,(2\,R\,|after\,Red\,pair)+p\,(2\,B\,|after\,Red\,pair)= \frac{{n-2\choose 2}}{{2n-2\choose2}} + \frac{{n\choose 2}}{{2n-2\choose2}}$$
does not hold.
First, for $n=1$ the problem makes no sense, because you cannot take 2 balls and then 2 more balls.
For $n=2$ or $n=3$, $p\,(2\,R\,|...
42,769 | Magic Urn Problem | This may not be formal but I figured:
\begin{align}
\frac{(n-2)(n-3)}{(2n-2)(2n-3)} + \frac{(n)(n-1)}{(2n-2)(2n-3)} &= .50 \\[10pt]
\frac{(n-2)(n-3)+(n)(n-1)}{(2n-2)(2n-3)} &= .50
\end{align}
Solving for $n$, QED $n=3$.
So it starts with 3 of each color, i.e., 6 balls total.
42,770 | Magic Urn Problem | Let $n$ be the number of balls. Then solve:
$$
\left[\frac{\frac n 2 }{n-2} \right] \left[\frac{\frac n 2 - 1}{n-3} \right] = \frac 1 2
$$
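The conclusion of these answers can be brute-force checked by enumeration (Python; here `n` counts balls of each color, matching the $\binom{n-2}{2}$ formulation above):

```python
from math import comb

def p_same_color_after_red_pair(n):
    """P(next two balls share a color) after removing a red pair
    from an urn that started with n red and n blue balls."""
    red, blue = n - 2, n          # balls left after drawing two reds
    total = comb(red + blue, 2)
    return (comb(red, 2) + comb(blue, 2)) / total

for n in range(2, 7):
    print(n, p_same_color_after_red_pair(n))
```

Only $n = 3$ per color (6 balls total) yields exactly 1/2.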
42,771 | Comparing magnitude of coefficients in a logistic regression | The marginal effect from a logistic regression is the following:
The partial derivative essentially tells you the effect of a unit change in some variable x.
The first part of the equation is always positive and would look like the curve below:
First thing to notice is that the marginal effect will depend on X. So...
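A small numeric sketch of this point, using the standard result that the marginal effect of $x$ on the predicted probability is $\beta_1\,p(1-p)$ (Python; the coefficients are hypothetical):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

beta0, beta1 = -1.0, 0.8  # hypothetical fitted coefficients

x = np.linspace(-4, 6, 201)
p = logistic(beta0 + beta1 * x)
marginal = beta1 * p * (1 - p)  # dp/dx: coefficient times p(1 - p)

# p(1-p) peaks at p = 0.5, so the marginal effect is largest where the
# linear predictor crosses zero and shrinks toward zero in both tails
print(f"max marginal effect = {marginal.max():.3f} (bound beta1/4 = {beta1 / 4:.3f})")
```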
42,772 | Comparing magnitude of coefficients in a logistic regression | I think you cannot. For a continuous variable such as Age, you can make the coefficient as big or small as you want if you change your measurement unit (such as from seconds to 1000 years). For multiple regression you can only study the relation of one predictor variable with your outcome variable at one time and ho...
42,773 | Approximating the distribution of a linear combination of beta-distributed independent random variables | If the skewnesses of the beta components are all low, then the absolute third moments should also be low*, and the normal approximation should tend to come in quite quickly (see the Berry-Esseen theorem for non-i.i.d. variates).
* I don't mean this comment as a general one, just in respect of beta variates. For example,...
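A quick simulation check of this claim (Python; the beta parameters and weights are made up, chosen so every component's skewness is small):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# a linear combination of independent, nearly symmetric (low-skew) betas
params = [(5, 5), (4, 6), (6, 4), (5, 7)]
weights = np.array([0.5, 1.0, -0.8, 0.3])

draws = sum(w * rng.beta(a, b, size=200_000)
            for w, (a, b) in zip(weights, params))

# moment-matched normal: mean and variance of the linear combination
mu = sum(w * a / (a + b) for w, (a, b) in zip(weights, params))
var = sum(w ** 2 * a * b / ((a + b) ** 2 * (a + b + 1))
          for w, (a, b) in zip(weights, params))

# compare the simulated distribution with the matching normal at a few quantiles
for q in (0.1, 0.5, 0.9):
    x = np.quantile(draws, q)
    print(f"q={q}: normal CDF at the empirical quantile = "
          f"{stats.norm.cdf(x, mu, np.sqrt(var)):.3f}")
```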
42,774 | Berry-Esseen bound for binomial distribution | Please don't shoot me if this doesn't work (well) or addresses a different problem than you want.
If your goal is to get the best asymptotic approximation of the Binomial, as opposed to getting the best Berry Esseen bound for its own sake, then consider using an Edgeworth Expansion http://projecteuclid.org/download/pdf...
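For context on how loose the plain Berry-Esseen bound is for a binomial, a quick comparison (Python; the constant 0.4748 is Shevtsova's 2011 estimate of the universal constant for the iid case, and $n = 100$, $p = 0.3$ are arbitrary):

```python
import numpy as np
from scipy import stats

n, p = 100, 0.3
q = 1 - p

# Berry-Esseen: sup |F_n(x) - Phi(x)| <= C * rho / (sigma^3 * sqrt(n)),
# with rho = E|X - p|^3 = p*q*(p^2 + q^2) for a single Bernoulli(p)
sigma = np.sqrt(p * q)
rho = p * q * (p ** 2 + q ** 2)
bound = 0.4748 * rho / (sigma ** 3 * np.sqrt(n))

# actual error of the plain normal approximation at the integer support points
k = np.arange(n + 1)
exact = stats.binom.cdf(k, n, p)
approx = stats.norm.cdf((k - n * p) / (sigma * np.sqrt(n)))
err = np.max(np.abs(exact - approx))

print(f"Berry-Esseen bound = {bound:.4f}, actual max CDF error = {err:.4f}")
```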
42,775 | testing contrast in two-way ANOVA using multcomp | This can be solved by using the ingenious combination of afex with lsmeans (and also multcomp if one desires so, but this is usually not necessary). Furthermore, thanks to afex functionality to aggregate automatically, dplyr is not needed.
library(afex)
require(lsmeans)
require(multcomp)
data(obk.long)
# Step 1: set u...
42,776 | Visualizing SVM results | Usually a dimension reduction technique is employed to visualize fit on many variables.
Usually again SVD is used to reduce dimensions and keep 2 components, and visualize.
Here's how it might look:
Note that the x and y axes are the top 2 components of the SVD decomposition.
I haven't used R much lately, so I u...
42,777 | Mixed effects - how to model random scaling of observations? | There is an identifiability issue. Suppose $\beta$ and $u$ work. Then, $\beta/2$ and $2u$ will work as well.
The basic solution is to build standard curves for your sensors and run them with your experiments. Then, your raw measurements would be transformed into measurements on the same scale.
Failing that, you wil...
42,778 | split-split plot design with unbalanced repeated measures in lme4 or nlme (SAS translation) | The one practical thing I can tell you is that the denominator degrees-of-freedom business is available for lme4 models, using the pbkrtest package or various wrappers for it: see the ?pvalues man page from recent versions of lme4.
library("lme4")
options(contrasts=c("contr.sum","contr.poly"))
m1 <- lmer(output~0+mainp...
42,779 | Which samples are used in random forests for calculating variable importance? | After each tree is grown, the values of a given predictor are randomly permuted in the out-of-bag sample (the one third of unique observations that are not part of the bootstrap sample) and the prediction error of the tree on the modified OOB sample is compared with the prediction error of the tree on the untouched OOB...
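The permutation step described above can be sketched as follows (toy data and a stand-in `model` function, both invented; a deterministic rotation stands in for the random shuffle so the sketch is reproducible): importance is the increase in prediction error after permuting one predictor's values in the held-out sample.

```python
# Toy held-out ("OOB") sample: the response depends only on feature 0.
oob = [([float(x0), float(x0 % 3)], float(x0)) for x0 in range(10)]

def model(features):               # stand-in for an already-fitted tree
    return features[0]

def mse(rows):
    return sum((model(f) - y) ** 2 for f, y in rows) / len(rows)

def permutation_importance(rows, j):
    vals = [f[j] for f, _ in rows]
    vals = vals[1:] + vals[:1]     # rotation stands in for a random shuffle
    shuffled = [(f[:j] + [v] + f[j + 1:], y) for (f, y), v in zip(rows, vals)]
    return mse(shuffled) - mse(rows)   # error increase = importance

assert permutation_importance(oob, 0) > 0   # informative feature matters
assert permutation_importance(oob, 1) == 0  # ignored feature: no change
```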
42,780 | Does linear regression assume all variables (predictors and response) to be multivariate normal? [duplicate] | As a general assertion this is just plain wrong; I agree completely with @Glen_b. For a review of the classical linear model assumptions see this
But essentially, normality of the error term ensures that the distribution of the $\hat{\beta}_k$ is exactly normal, instead of just being approximated by the t distribution...
42,781 | Testing significant difference of correlation matrices | I'm looking at the same issue - I just came across the functions cortest.normal and cortest.jennrich, in the excellent psych package for R by William Revelle (see http://www.personality-project.org/r/html/cortest.mat.html). That page also contains references to articles on these tests. In your (our) case, the Jennrich t...
42,782 | Berry-Esseen Theorem with Continuity Correction | Well, if $S_n$ is discrete, then we have $F_n(x)=F_n(x+\frac12)$ for $x\in \mathbb Z$, in which case we recover the same Berry-Esseen bound for the continuity correction as without:
$$\left| F_n(x)-\Phi\left(\frac{x+\frac12}{\sqrt n}\right)\right| = \left| F_n\left(x+\frac12\right)-\Phi\left(\frac{x+\frac12}{\sqrt n}\right)\right|$$
42,783 | Cholesky factorization and forward substitution less accurate than inversion? | For most cases Cholesky factorization should be a faster and more numerically stable method for solving a linear system of equations such as $Ax=b$, given that $A$ is a positive definite matrix. The standard workhorse behind the solution of linear systems is the QR decomposition; it does need the system $A$ to...
42,784 | How does the presence of factors affect the interpretation of the other coefficients in a regression? | Does this imply that the coefficient colourWhite:age = -0.0373729 is strictly limited to describing only the interaction between colour and age for people who are unemployed, non-citizen and arrested in 1997?
Yes, that is exactly what it means. If you want to investigate this interaction for the other years, you would...
42,785 | How do v-structures in graphical models reflect real world data? | You understand v-structures, but let's recall formally what they mean. What that v-structure (applied to your example) encodes is: A: Student IQ and C: Test difficulty are independent, i.e. $$I(A,C), \ i.e. \ P(A|C) = P(A)$$ but they are dependent given B: Test score. I.e. $$D(A,C|B), \ i.e. \ P(A|C,B) \neq P(A|B)$$ ...
42,786 | k-means and other non-parametric methods for clustering 1 dimensional data | K-means finds partitions in a single vector based on any heterogeneity in that vector. It won't automatically find two clusters unless you tell it to find two clusters out of the n possible clusters where n is the finite number of observations in your sample. It's only by generating up to n clusters and then using some...
42,787 | k-means and other non-parametric methods for clustering 1 dimensional data | K-means assigns objects to the nearest mean.
This makes sense mathematically as it minimizes the squared errors.
But if you look at your data set, the right Gaussian has a larger variance than the left. Since k-means does not take this into account, the result will be suboptimal. GMM should work better on this particul...
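A small numeric sketch of this point (synthetic 1-D data and a bare-bones Lloyd's iteration, all invented for illustration): nearest-mean assignment puts the boundary halfway between the two means, so the low tail of the wide cluster gets pulled into the tight one.

```python
# Two 1-D clusters: a tight one near 0 and a wide one near 10.
tight = [-0.2, -0.1, 0.0, 0.1, 0.2]
wide = [4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
data = tight + wide

def kmeans_1d(xs, m1, m2, iters=20):
    # Lloyd's algorithm with k=2: assign to nearest mean, recompute means.
    for _ in range(iters):
        c1 = [x for x in xs if abs(x - m1) <= abs(x - m2)]
        c2 = [x for x in xs if abs(x - m1) > abs(x - m2)]
        m1, m2 = sum(c1) / len(c1), sum(c2) / len(c2)
    return m1, m2

m1, m2 = kmeans_1d(data, 0.0, 10.0)
cluster1 = [x for x in data if abs(x - m1) <= abs(x - m2)]
# The point 4.0, generated by the wide cluster, lands in the tight cluster
# because k-means ignores the difference in variances.
assert 4.0 in cluster1
```

A Gaussian mixture fitted by EM would weight distances by each component's variance and avoid this mis-assignment.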
42,788 | Difference between Kaplan Meier Estimator and the Empirical CDF | As I understand from a comment, the OP didn't realize that the Kaplan-Meier estimate is nothing but the empirical estimate of the survival function in the case where there is no censoring.
Let me say a word about that. Consider two independent random variables $X$ and $Y$ with continuous distributions, and independent repl...
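That equivalence is easy to verify numerically. A sketch (invented event times, no censoring) comparing the Kaplan-Meier product-limit estimate with one minus the empirical CDF:

```python
times = [2.0, 3.0, 3.0, 5.0, 8.0]            # event times, none censored

def km_survival(t, times):
    # Product-limit estimate: prod over event times t_i <= t of (1 - d_i/n_i).
    s = 1.0
    for ti in sorted(set(times)):
        if ti > t:
            break
        n_i = sum(1 for x in times if x >= ti)   # at risk just before t_i
        d_i = times.count(ti)                    # events at t_i
        s *= 1 - d_i / n_i
    return s

def ecdf_survival(t, times):
    return sum(1 for x in times if x > t) / len(times)

# With no censoring the two estimates coincide at every time point.
for t in [0, 2, 2.5, 3, 5, 8, 9]:
    assert abs(km_survival(t, times) - ecdf_survival(t, times)) < 1e-12
```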
42,789 | How high is too high with Cronbach's alpha? | When I have run into this kind of error, it was because I had made a mistake during the coding process (I'm using SPSS). I usually use numbers like 99 or 999 to mark missing values. Once I forgot to specify the missing values to be excluded in the variable view, and I began the scale analysis. I got very high C-alphas, around 0.95, j...
42,790 | why the non-seasonal and seasonal parts are multiplied in ARIMA models? | why are they multiplied in the first place?
To produce a model where the seasonal component enters multiplicatively? It makes a kind of intuitive sense that it might work that way, and often seems to work okay in practice. Indeed, you so often see it (at least approximately) in the diagnostic plots (ACF and PACF) th...
42,791 | Understanding the background to chi-square test for tables | From a memorable, intuitive perspective, your account is fine. The considerations of degrees of freedom rest on the understanding that each standardized residual,
$$Z_i = \frac{O_i-E_i}{\sqrt{E_i}},$$
is close enough to having a standard Normal distribution that the sum of their squares
$$X^2 = \sum Z_i^2 = \sum_{i=1}^n \frac{(O_i-E_i)^2}{E_i}$$ ...
If you really are up to differentiating by matrices not vectors, yo... | How to differentiate with respect to a matrix? | Matrix calculus is used in such cases. Your equation looks like it's from OLS (least squares) theory. In those you differentiate by vector $x$ some quadratic forms like $\frac{\partial (x'A'Ax)}{\part | How to differentiate with respect to a matrix?
Matrix calculus is used in such cases. Your equation looks like it's from OLS (least squares) theory. In those you differentiate by vector $x$ some quadratic forms like $\frac{\partial (x'A'Ax)}{\partial x}$. Look up relevant formulae in my link above.
If you really are up... | How to differentiate with respect to a matrix?
Matrix calculus is used in such cases. Your equation looks like it's from OLS (least squares) theory. In those you differentiate by vector $x$ some quadratic forms like $\frac{\partial (x'A'Ax)}{\part |
42,793 | How to select the best ARIMA order with low MAPE in R | This was the first that came to mind, and is just an example but it is slow and it should be done once per integration order you wish to consider. Looping through the 10 first AR and MA orders (again only example), saving the MAPE accuracy measures in a matrix "x". Then identifying the smallest value in the matrix afte... | How to select the best ARIMA order with low MAPE in R | This was the first that came to mind, and is just an example but it is slow and it should be done once per integration order you wish to consider. Looping through the 10 first AR and MA orders (again | How to select the best ARIMA order with low MAPE in R
This was the first that came to mind, and is just an example but it is slow and it should be done once per integration order you wish to consider. Looping through the 10 first AR and MA orders (again only example), saving the MAPE accuracy measures in a matrix "x". ... | How to select the best ARIMA order with low MAPE in R
This was the first that came to mind, and is just an example but it is slow and it should be done once per integration order you wish to consider. Looping through the 10 first AR and MA orders (again |
42,794 | Computing VaR with AR-GARCH | This is best done through simulation. See my MATLAB code example and explanation below:
%% Get S&P 500 price series
d=fetch(yahoo,'^GSPC','Adj Close','1-jan-2014','30-dec-2014');
n = 1; % # of shares
p = d(end:-1:1,2); % share price, the dates are backwards
PV0 = n*p(end); % portfolio value today
%%
r=price2ret(p,[],'C... | Computing VaR with AR-GARCH | This is best done through simulation. See my MATLAB code example and explanation below:
%% Get S&P 500 price series
d=fetch(yahoo,'^GSPC','Adj Close','1-jan-2014','30-dec-2014');
n = 1; % # of shares
| Computing VaR with AR-GARCH
This is best done through simulation. See my MATLAB code example and explanation below:
%% Get S&P 500 price series
d=fetch(yahoo,'^GSPC','Adj Close','1-jan-2014','30-dec-2014');
n = 1; % # of shares
p = d(end:-1:1,2); % share price, the dates are backwards
PV0 = n*p(end); % portfolio value ... | Computing VaR with AR-GARCH
This is best done through simulation. See my MATLAB code example and explanation below:
%% Get S&P 500 price series
d=fetch(yahoo,'^GSPC','Adj Close','1-jan-2014','30-dec-2014');
n = 1; % # of shares
|
42,795 | Understanding confidence intervals in Firth penalized logistic regression | The fact that firth=FALSE doesn't give similar results to glm is puzzling to me -- hopefully someone else can answer. As far as pl goes, though, you're almost always better off with profile confidence intervals. The Wald confidence intervals assume that the (implicit) log-likelihood surface is locally quadratic, whic...
42,796 | Determining probability mass function (PMF) using Bayesian approach | The posterior probability over the set of true media (A,B,C) is a true probability distribution, conditional on the observed medium, A in your case. You only have to apply Bayes' theorem $$P(A\text{ true}|A\text{ observed})=P(A\text{ observed}|A\text{ true})P(A\text{ true})/P(A\text{ observed})$$ The first term on the ...
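A numeric sketch of that Bayes step (priors and observation probabilities all invented): the posterior over true media, conditional on observing A, is a proper PMF.

```python
media = ["A", "B", "C"]
prior = {"A": 0.5, "B": 0.3, "C": 0.2}       # P(true medium), invented
# P(A observed | true medium), invented confusion probabilities
p_obs_a = {"A": 0.9, "B": 0.2, "C": 0.1}

# Bayes: P(m true | A observed) = P(A obs | m) P(m) / P(A obs)
evidence = sum(p_obs_a[m] * prior[m] for m in media)
posterior = {m: p_obs_a[m] * prior[m] / evidence for m in media}

assert abs(sum(posterior.values()) - 1.0) < 1e-12   # a true PMF
assert abs(posterior["A"] - 0.45 / 0.53) < 1e-12
```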
42,797 | Confidence interval before the first event in a Kaplan–Meier curve | Various ways to estimate Kaplan-Meier (KM) confidence intervals (CI) in difficult situations like this--before the first event time, with heavy censoring, or at late times when few are still at risk--have been discussed by Fay et al., "Pointwise confidence intervals for a survival distribution with small samples or hea...
42,798 | Confidence interval before the first event in a Kaplan–Meier curve | It seems to me that there should be a way to compute a confidence interval on survival prior to the first failure using the likelihood concept.
The likelihood of seeing $k=0$ failures from $n$ units is given by the binomial probability mass function $B(n,p)$: https://en.wikipedia.org/wiki/Binomial_distribution
if $\a...
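Continuing that binomial logic (a sketch, not taken from the answer's elided text): with $k=0$ failures in $n$ units the likelihood of the data is $S^n$, so the one-sided lower $1-\alpha$ confidence bound on survival solves $S^n = \alpha$; for small $\alpha$ this is close to the familiar "rule of three" upper bound $3/n$ on the failure probability.

```python
def lower_survival_bound(n, alpha=0.05):
    # Smallest survival probability still consistent (at level alpha)
    # with observing zero failures in n units: solve S^n = alpha.
    return alpha ** (1.0 / n)

n = 60
s_lo = lower_survival_bound(n)
assert 0 < s_lo < 1
# At the bound, the likelihood of zero failures is exactly alpha.
assert abs(s_lo ** n - 0.05) < 1e-9
# Implied upper bound on the failure probability is close to 3/n.
assert abs((1 - s_lo) - 3 / n) < 0.01
```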
42,799 | What is "fitted function" in the context of boosted regression tree? | Your fitted model is best viewed as a function that consumes data points and returns predictions; this is the fitted function in its greatest generality. For example, in linear regression, the fitted model can be expressed as a vector of estimated model coefficients $(\beta_0, \beta_1, \ldots, \beta_n)$, and the fitte...
42,800 | Tests of equal variances- am I doing this right? | Variance alone isn't suitable, since you're trying to compare closeness to a particular location on the circle. You could have small variance but be nowhere near 0.
If you compare absolute deviation from target, you'd have two sets of values on $[0,\pi]$ (or $[0,180]$ if you prefer to work in degrees), for which you're...
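The folding step implied here — turning signed angular errors into absolute deviations on $[0,\pi]$ — needs the raw difference wrapped around the circle first; a sketch with invented angles:

```python
import math

def abs_circular_deviation(angle, target=0.0):
    # Wrap the signed difference into (-pi, pi], then take its magnitude,
    # so the result always lies in [0, pi].
    d = (angle - target + math.pi) % (2 * math.pi) - math.pi
    return abs(d)

# 350 degrees is only 10 degrees away from a 0-degree target...
assert abs(abs_circular_deviation(math.radians(350)) - math.radians(10)) < 1e-9
# ...while 180 degrees is the farthest possible point on the circle.
assert abs(abs_circular_deviation(math.radians(180)) - math.pi) < 1e-9
```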