idx | question | answer
|---|---|---|
43,601 | Location test under a bounded non-stationarity?

Ok, I've thought of two possible ways to answer this problem using Bayesian analysis. I will assume $\sigma$ to be known throughout this answer. First start with a "baby" case, where $n=2$ (or alternatively, only using the last two observations as a first approximation). You would usually start this by assuming a...
43,602 | cforest and randomForest classification prediction error

Could it be your value for the mtry parameter in cforest? With it set to 8, you're using bagging. Set it to mtry=3 and see how it compares to the randomForest algorithm.
43,603 | cforest and randomForest classification prediction error

There are differences in the implementations of randomForest and cforest, mainly in how predictions are computed from the forests. The differences are discussed in http://www.jstatsoft.org/v50/i11/paper, which provides a framework for comparing errors in survival forests.
43,604 | Conducting planned comparisons in mixed model using lmer

It sounds like you basically have a problem of model choice. I think this is best treated as a decision problem. You want to act as if the final model you select is the true model, so that you can make conclusions about your data.
So in decision theory, you need to specify a loss function, which says how you are goin...
43,605 | Comparing model fits across a set of nonlinear regression models

For each participant, compute the cross-validated (leave-one-out) prediction error per functional form and assign the participant the form with the smallest one. That should do something to keep the overfitting under control.
That approach ignores higher-level problem structure: the population has groups that are assu...
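The per-participant selection step could be sketched like this (a hedged illustration: the candidate functional forms here are hypothetical, since the question's actual forms aren't shown; plain NumPy least squares, with leave-one-out done by refitting without each point):

```python
import numpy as np

# Hypothetical candidate functional forms (the question's actual forms are
# not shown); each maps x -> a least-squares design matrix.
FORMS = {
    "linear":    lambda x: np.column_stack([np.ones_like(x), x]),
    "quadratic": lambda x: np.column_stack([np.ones_like(x), x, x**2]),
    "log":       lambda x: np.column_stack([np.ones_like(x), np.log(x)]),
}

def loo_error(x, y, design):
    """Leave-one-out squared prediction error for one functional form."""
    err = 0.0
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        beta, *_ = np.linalg.lstsq(design(x[mask]), y[mask], rcond=None)
        pred = design(x[i:i + 1]) @ beta
        err += float((pred[0] - y[i]) ** 2)
    return err

def best_form(x, y):
    """Assign the participant the form with the smallest LOO error."""
    return min(FORMS, key=lambda name: loo_error(x, y, FORMS[name]))

# A participant whose data is quadratic is assigned the quadratic form.
x = np.linspace(1.0, 5.0, 20)
y = 1.0 + 0.5 * x**2
print(best_form(x, y))   # quadratic
```

Because each candidate is refit without the held-out point, a more flexible form only wins if its extra flexibility actually predicts, which is the overfitting control the answer refers to.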
43,606 | Posterior consistency for scale-mixture shrinkage priors in low dimension?

Based on your reference I believe that you are estimating the vector $\boldsymbol{\beta}$ of size $p_n$ with a posterior distribution based on the observation of the vector $\mathbf{Y}$ of size $n$ in the model $$\mathbf{Y} = \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\epsilon}$$ where $\mathbf{X}$ is a fixed regress...
43,607 | z-score VS min-max normalization

The answer to your specific question about why z-score normalisation handles outliers better is largely to do with how standard deviations are calculated in the first place.
If there are outliers, then the effect that the deviation from the mean related to those outliers will have on the final statistic (i.e., the stand...
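A quick numerical illustration of the contrast (assuming the usual plain-NumPy definitions of the two scalings): with one extreme value, min-max lets that single point set the whole range, while the z-score absorbs it into an inflated standard deviation:

```python
import numpy as np

def min_max(x):
    # Rescale to [0, 1]; the scale is set entirely by the two extremes.
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    # Centre by the mean and scale by the standard deviation.
    return (x - x.mean()) / x.std()

x = np.array([1., 2., 3., 4., 5., 100.])   # one extreme outlier

mm = min_max(x)
zs = z_score(x)

# Under min-max, the five ordinary points are squashed into ~4% of [0, 1],
print(mm[4] - mm[0])   # 4/99 ≈ 0.0404
# while on the z-score scale the outlier sits only ~2.2 SDs out, because
# it has itself inflated the standard deviation used for scaling.
print(zs[5])           # ≈ 2.23
```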
43,608 | PyMC3 implementation of Bayesian MMM: poor posterior inference

UPDATE:
I tried your dataset and got a similar result. However, I noticed that the estimate for the noise (the variance of the error term) is relatively large. It indicates that the noise explains most of the variation of this particular dataset, according to our model. At least, I got a different outcome for...
43,609 | Probabilities arising from permutations

For arbitrary distributions $D$ and $E$ and for the permutation class of descending numbers, the algorithm I presented returns $n$ with probability
$$\int_{-\infty}^{\infty} \left(\frac{F_E(z)^{n-1}}{(n-1)!} - \frac{F_E(z)^n}{n!}\right) \, dF_D(z)$$
if $n \ge 1$, and 0 otherwise, where $F_D$ and $F_E$ are distribution functions...
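The algorithm itself is not reproduced in this excerpt, but for $D = E = \mathrm{Uniform}(0,1)$ the formula reduces to $P(N = n) = 1/n! - 1/(n+1)!$ (i.e. $1/2, 1/3, 1/8, \dots$), and that can be checked by simulating one natural construction consistent with the formula (an assumption on my part): draw $z \sim D$, then draw $e_1, e_2, \dots \sim E$ and return the largest $n$ with $z > e_1 > \cdots > e_{n-1}$:

```python
import random
from collections import Counter

def descending_run(rng):
    """Draw z ~ D, then e_i ~ E while the chain z > e1 > e2 > ... keeps
    descending; return the chain length n, counting z itself."""
    prev = rng.random()       # z, from D = Uniform(0, 1)
    n = 1
    while True:
        e = rng.random()      # from E = Uniform(0, 1)
        if e < prev:          # chain still descending
            n += 1
            prev = e
        else:
            return n

rng = random.Random(42)
reps = 200_000
counts = Counter(descending_run(rng) for _ in range(reps))
p1 = counts[1] / reps   # theory: 1/1! - 1/2! = 1/2
p2 = counts[2] / reps   # theory: 1/2! - 1/3! = 1/3
print(p1, p2)
```

The match follows because $P(N \ge n)$ is the probability that $e_1, \dots, e_{n-1}$ are all below $z$ and in descending order, which is exactly $F_E(z)^{n-1}/(n-1)!$ integrated over $z$.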
43,610 | Error “system is computationally singular” when running cox.zph for a Cox Model

I just had this problem, and solved it. Hopefully this will help you:
I had deleted one small group from my data set, based on a covariate with three levels. However, that covariate had been set to a factor with three levels. I noticed that there were NA’s showing up for that one level. When I recast the covariate a...
43,611 | Improve precision/recall for class imbalance?

It's clear that your models are suffering from the imbalance in your data, which is something you'll need to fix. Now, on to your questions:
Any other feature engineering techniques I can do to improve predicting class 0? [have tried different things on text like TFIDF, Hashing Trick, SelectKBest, SVD(), and MaxAbsScale...
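One standard first fix for the imbalance (an assumption, since the question's models aren't shown here) is to reweight classes inversely to their frequency; scikit-learn's `class_weight="balanced"` heuristic is `n_samples / (n_classes * count)`, easy to reproduce directly:

```python
import numpy as np

def balanced_class_weights(y):
    """The 'balanced' heuristic: weight_c = n_samples / (n_classes * n_c),
    so rarer classes get proportionally larger loss weights."""
    classes, counts = np.unique(y, return_counts=True)
    weights = len(y) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# 90/10 imbalance: the minority class is weighted 9x the majority class.
y = np.array([1] * 90 + [0] * 10)
w = balanced_class_weights(y)
print(w)   # {0: 5.0, 1: 0.555...}
```

These weights can then be passed to most classifiers' loss functions (or used as sampling probabilities when resampling).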
43,612 | Standard Error of the cumulative value for time series

1) One could estimate a model for each of the ten separately and then estimate the parameters globally across all 10 items, leading to an F test. Do this for each time step.
2) You can use Monte Carlo techniques (bootstrapping) to obtain density functions for the next k periods and then simply sum the pseudo-observations...
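Option (2) can be sketched as follows (a hypothetical setup: a toy AR(1) stands in for whatever model was actually fitted; the idea is just to simulate many future paths and summarise the per-path cumulative value):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, last = 0.6, 1.0, 2.0   # toy AR(1): x_t = phi * x_{t-1} + eps_t
k, B = 5, 20_000                   # forecast horizon and number of paths

# Simulate B future paths and record the cumulative value over k periods.
cum = np.empty(B)
for b in range(B):
    x, total = last, 0.0
    for _ in range(k):
        x = phi * x + rng.normal(0.0, sigma)
        total += x
    cum[b] = total

point = cum.mean()         # point forecast of the cumulative value
se = cum.std(ddof=1)       # its standard error, read straight off the draws
lo, hi = np.percentile(cum, [2.5, 97.5])   # percentile interval
print(point, se, lo, hi)
```

The same recipe works with residual-bootstrap draws instead of Gaussian noise, which is what makes it robust to non-normal errors.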
43,613 | Standard Error of the cumulative value for time series

1) You cannot use a bunch of pairwise t-tests because this will massively increase the likelihood of a Type I error. You need to perform a 2-step procedure to avoid this:
Step 1. If your null hypothesis is that all the means are equal, and the alternative is that the means are not equal, first use a 1-way ANOVA test...
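Step 1's F statistic can be computed directly (a minimal from-scratch sketch; in practice `scipy.stats.f_oneway` or statsmodels would be used):

```python
import numpy as np

def one_way_anova_F(*groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, N = len(groups), len(all_x)
    ssb = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)  # between
    ssw = sum(((g - np.mean(g)) ** 2).sum() for g in groups)       # within
    return (ssb / (k - 1)) / (ssw / (N - k))

groups = [np.array([1., 2., 3.]),
          np.array([2., 3., 4.]),
          np.array([3., 4., 5.])]
F = one_way_anova_F(*groups)
print(F)   # 3.0 for this toy data
```

Only if this omnibus F is significant does one move to Step 2's pairwise comparisons, which is what keeps the familywise Type I error under control.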
43,614 | How to predict routes using clustering data

Because of my low reputation I'll use this reply as a comment.
Isn't fitting a curve over the points of the blue course enough? There are a lot of methods out there, one of them being the classic spline.
You can also try your own heuristic. I'll give some that came to mind.
Depending on the number of points you could...
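The curve-fitting idea, sketched with a plain polynomial fit (a simplifying assumption: the course can be written as y = f(x); a real GPS track would usually need a spline parameterised by arc length instead):

```python
import numpy as np

# Hypothetical noisy points sampled along a parabola-shaped course.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 40)
y = 0.3 * x**2 - 2.0 * x + 5.0 + rng.normal(0.0, 0.1, x.size)

# Fit a low-degree polynomial through the observed points...
coeffs = np.polyfit(x, y, deg=2)
course = np.poly1d(coeffs)

# ...and use it to interpolate the route between observations.
print(course(5.0))   # close to the true value 0.3*25 - 10 + 5 = 2.5
```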
43,615 | Intuition: What is the difference between linear factor models and regular linear regression?

The difference lies not in the equations but in what they are used for. Whereas in linear regression X is an observed known value, in a linear factor model X is itself a random variable. The linear factor model is a statement about the joint distribution of X, Y and Z. Furthermore, linear factor models are often used t...
43,616 | Expected value of a "logistic uniform" multivariate

A couple of thoughts on this problem which might be of interest: your predictor can be interpreted as the Bayes classifier for a Gaussian mixture model. For example, if you take $r \in \mathbf{R}^d, Q \succ 0$, then
\begin{align}
y_j(x) &= \frac{\omega_j \mathcal{N} \left( x | \mu_j, Q^{-1} \right)}{\sum_{k=1}^n \omega_k \mathcal{N} \left( x | \mu_k, Q^{-1} \right)}
\end{align}
...
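A small numerical check of that reading (assuming, as in the excerpt, a shared precision matrix $Q$, so the common covariance is $Q^{-1}$): the class responsibilities are a softmax over $\log \omega_j + \log \mathcal{N}(x \mid \mu_j, Q^{-1})$, and terms shared by all components cancel:

```python
import numpy as np

def responsibilities(x, mus, omegas, Q):
    """Bayes posterior p(class j | x) for a Gaussian mixture with a shared
    precision matrix Q, computed as a softmax over log-scores."""
    # log N(x | mu_j, Q^{-1}) up to an additive constant common to all j
    scores = np.array([np.log(w) - 0.5 * (x - m) @ Q @ (x - m)
                       for m, w in zip(mus, omegas)])
    scores -= scores.max()        # numerical stability
    p = np.exp(scores)
    return p / p.sum()

Q = np.eye(2)
mus = [np.array([0., 0.]), np.array([3., 0.])]
p = responsibilities(np.array([0.5, 0.0]), mus, [0.5, 0.5], Q)
print(p.sum())        # 1.0 -- a proper posterior
print(p[0] > p[1])    # True: x is nearer the first mean
```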
43,617 | Expectation Maximization intuitive explanation

Essentially you want to hill climb by differentiating $p(w,r)$ with respect to $w$ and $r$, adjusting $w$ and $r$ by some small constant amount with sign corresponding to the largest increase in gradient, and then repeating until you reach a maximum.
Since you're choosing $w$ and $r$ randomly and you haven't told us how $p(...
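The hill-climbing recipe, sketched on a toy concave surface (the objective here is hypothetical, since the question's $p(w,r)$ isn't given, and the full gradient is used rather than only its sign):

```python
import numpy as np

def grad_p(w, r):
    """Gradient of the toy objective p(w, r) = -(w - 1)^2 - (r + 2)^2,
    whose unique maximum is at (w, r) = (1, -2)."""
    return np.array([-2.0 * (w - 1.0), -2.0 * (r + 2.0)])

w, r = 5.0, 5.0          # an arbitrary starting point
step = 0.1               # small constant step size
for _ in range(200):     # repeat until (approximately) at the maximum
    g = grad_p(w, r)
    w += step * g[0]
    r += step * g[1]

print(round(w, 4), round(r, 4))   # converges to 1.0, -2.0
```

Each update here contracts the distance to the maximum by a constant factor, which is why the simple fixed step size is enough on this surface; on a non-concave $p$, this only finds a local maximum.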
43,618 | How to model an "order-invariant" function by neural networks

The easiest way would be to train a fully connected neural network on randomly ordered inputs, ideally on all permutations.
EDIT:
Alternatively, if you ask for a design of a network that is order-invariant, you can do the following:
You would need to do a 1D convolutional neural network, where the stride size is equal ...
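Another order-invariant design (not from the answer above; this is the symmetric-pooling idea behind Deep Sets) applies the same transform to every input element and then pools with a symmetric function such as sum:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(1, 8))   # phi: shared per-element transform
W2 = rng.normal(size=8)        # rho: head applied after pooling

def f(x):
    """Permutation-invariant network: the same phi on every element,
    symmetric sum-pooling, then rho on the pooled vector."""
    h = np.tanh(x[:, None] @ W1)   # shape (n, 8): phi applied element-wise
    pooled = h.sum(axis=0)         # order of elements can no longer matter
    return float(np.tanh(pooled) @ W2)

x = np.array([0.3, -1.2, 0.7, 2.0])
print(f(x), f(x[::-1]))   # identical up to floating-point roundoff
```

The invariance holds by construction for any input length, whereas training on shuffled inputs only encourages approximate invariance.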
43,619 | How to model an "order-invariant" function by neural networks

This constraint has got to be the same as equal weights on all variables, or that in fact you're dealing with one variable. There's really only one variable in the model.
43,620 | Seasonal ARIMA Modelling in R [closed]

You can "force" seasonality by setting D=1 or adding regressors. If you think there is more complex seasonality, you may consider using Fourier terms; see Hyndman's post on complex seasonality.
43,621 | Seasonal ARIMA Modelling in R [closed]

Try using this command rather than the one you are using for getting the parameters of ARIMA:

    arima1 = auto.arima(data.train, trace=FALSE, test="kpss", ic="aic",
                        stepwise=FALSE, approximation=FALSE)

Sometimes using these commands gives the best model.
43,622 | Seasonal ARIMA Modelling in R [closed]

Your data suggests the following model with . The actual, fit and forecast is here. The data suggests a level shift (visually obvious), two statistically significant seasonal indicators (April and September), and a few anomalies (6). I used R to do the analysis. Unfortunately auto.arima makes some critical assumptio...
43,623 | Seasonal ARIMA Modelling in R [closed]

Maybe you can force the function auto.arima() to return the seasonal model by using it like this:

    auto.arima(database, seasonal=TRUE)
43,624 | Multi-label classification: Predict product category

Since you have ~800 categories as the classification variable, in my understanding the accuracy of the classification can be increased by better models than a ridge regression model alone. Neural networks with multiple layers can be more adept, and you can also build an ensemble of models to arrive at the final classificati...
43,625 | Does bias in statistics and machine learning mean the same thing?

Yes, they mean the same thing.
This free chapter covers bias and variance of estimators: http://www.deeplearningbook.org/contents/ml.html
Please see section 5.4, which has a good explanation of what they are.
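In that shared sense, the bias of an estimator is $\mathbb{E}[\hat\theta] - \theta$. A standard example (not from the linked chapter, but illustrating the same definition): the plug-in variance estimator that divides by $n$ is biased downward by a factor $(n-1)/n$, which a quick simulation confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_var = 5, 200_000, 1.0

samples = rng.normal(0.0, 1.0, size=(reps, n))
plugin   = samples.var(axis=1, ddof=0)   # divide by n     -> biased
unbiased = samples.var(axis=1, ddof=1)   # divide by n - 1 -> unbiased

print(plugin.mean())     # ≈ (n-1)/n * true_var = 0.8
print(unbiased.mean())   # ≈ true_var = 1.0
```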
43,626 | Does bias in statistics and machine learning mean the same thing?

No, they don't. But they're similar.
In ML the learning bias is the set of wrong assumptions that a model makes to fit a dataset. That can be thought of as a measure of how well the model fits the training dataset.
On the other hand, the regular statistical bias is mathematically defined as the average of the absolute er...
43,627 | Linear model with hidden variable

You already have this:
$$
\frac{1}{b} (y - a) = \frac{1}{d}(z - c)
$$
So let's go one step further and solve it for $y$:
$$
y = \frac{b}{d}(z - c) + a = \frac{b}{d} \cdot z - \frac{b}{d} \cdot c + a
$$
This means you can find $(a - \frac{b}{d}\cdot c)$ and the ratio $b/d$ by regressing $y$ on $z$. Similarly...
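A quick simulation of that identity (with $z$ generated without noise for clarity; measurement error in $z$ would attenuate the estimated slope): taking $y = a + bx$ and $z = c + dx$, regressing $y$ on $z$ recovers the slope $b/d$ and intercept $a - (b/d)c$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = 1.0, 2.0, 3.0, 4.0
x = rng.normal(size=10_000)                 # the hidden variable
y = a + b * x + rng.normal(0, 0.1, x.size)  # noisy observed response
z = c + d * x                               # observed proxy, noiseless here

slope, intercept = np.polyfit(z, y, deg=1)
print(slope)       # ≈ b/d = 0.5
print(intercept)   # ≈ a - (b/d)*c = -0.5
```

As the answer notes, only the ratio $b/d$ and the combined intercept are identifiable; $a$, $b$, $c$, $d$ individually are not.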
43,628 | Linear model with hidden variable | This is an old question, but when it popped up in the timeline I thought it would be a nice example of working with latent variables using Bayesian inference in stan:
library(rstan)
library(tidyverse)
a1 = 5
a2 = 10
b1 = 2
b2 = 3
e1 = .5
e2 = .7
n = 1000
x = rnorm(n)
y1 = a1 + b1 * x + rnorm(n, 0, e1)
y2 = a2 + b2 * x...
43,629 | Adding a magnitude penalty to a GAM | An interesting question. This is not an answer, but a few rambling thoughts at the moment.
It sounds like you want to penalize a functional of the parameters, i.e., your penalty on $\Delta \eta$ is implicitly a function $g(\beta_{11}, \beta_{12}, \dotsc, \beta_{21}, \beta_{22}, \dotsc)$ of the parameters in the basis exp...
43,630 | Deep Learning vs Structured Learning | Here is a nice summary of the differences:
And you can refer to this lecture: Statistical and Algorithmic Foundations of Deep Learning for more details.
And you can refer to this lecture: Statistical and Algorithmic Foundations of Deep Learning for more details. | Deep Learning vs Structured Learning
Here is a nice summary of the differences:
And you can refer to this lecture: Statistical and Algorithmic Foundations of Deep Learning for more details. | Deep Learning vs Structured Learning
Here is a nice summary of the differences:
And you can refer to this lecture: Statistical and Algorithmic Foundations of Deep Learning for more details. |
43,631 | Prove that $t_{n-1, \alpha/2}$ is strictly decreasing in $n$ | This looks like a good opportunity to discuss important relationships among the Student $t$ distributions. The analysis needed to demonstrate them is elementary, requiring only the basic concepts of differential Calculus, and with the right strategy can be reduced to a simple algebraic calculation.
There is a classic...
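The claimed monotonicity can be inspected numerically before proving it; a quick sketch using SciPy's Student-$t$ quantiles for $\alpha = 0.05$:

```python
from scipy.stats import norm, t

# Upper 2.5% critical values t_{n-1, alpha/2} for growing n: they decrease
# monotonically toward the limiting normal quantile z_{0.025} ~ 1.96.
crit = [t.ppf(0.975, df=n - 1) for n in (2, 5, 10, 30, 100, 1000)]
print(crit)
print(norm.ppf(0.975))  # the limit as n grows
```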
43,632 | What's the added value of SD line over regression line when examining association between 2 variables? | The SD line is a didactical and visual aid that helps in seeing the relation for the slope of the regular regression line.
$$\text {slope regression } = r_{xy} \, \frac {\sigma_y}{\sigma_x} = r_{xy} \, \text {slope SD line} $$
The SD line shows how x and y are varying and this can give a more or less steep or flat line depe...
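The relation between the two slopes is an exact algebraic identity, which a simulation confirms (the data and the slope 2 are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)

r = np.corrcoef(x, y)[0, 1]
sd_slope = y.std() / x.std()          # slope of the SD line
reg_slope = np.polyfit(x, y, 1)[0]    # slope of the regression line

print(reg_slope, r * sd_slope)        # identical up to rounding
```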
43,633 | Circular statistics for showing the directional mean lies outside a specified region of a duty cycle | Thoughts on my proposed method (original question)
In my proposed approach, computing the mean singing direction via bootstrapped CIs had an issue. In this formulation, I did not include the expected observations that would happen under a null hypothesis (the bird is singing randomly with no avoidance behaviour). He...
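For context, the mean direction referred to here is computed from sines and cosines of the angles, not from a naive average of the raw times; a small sketch with made-up singing times on a 24 h cycle:

```python
import numpy as np

def circular_mean(angles):
    """Mean direction of angles (radians) on a circle."""
    return np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())

hours = np.array([23.0, 23.5, 0.5, 1.0])   # clustered around midnight
angles = hours / 24 * 2 * np.pi
mean_hour = circular_mean(angles) / (2 * np.pi) * 24 % 24
print(mean_hour)   # ~0 h; a naive average of the hours would say 12 h
```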
43,634 | Model selection between parametric nonparametric methods | You shouldn't generally use AIC to choose between parametric and nonparametric models. Parametric and nonparametric models have different modeling assumptions. The traditional AIC is based on a function of the likelihood. Likelihoods of parametric and nonparametric models are not always comparable.
An alternative that...
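The alternative is cut off above; one commonly used option (not necessarily the one the answer had in mind) is to compare the models on out-of-sample prediction error instead of a likelihood-based criterion. A sketch with made-up data and an arbitrary choice of k:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=400)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=400)   # truth is linear here
x_tr, y_tr, x_te, y_te = x[:300], y[:300], x[300:], y[300:]

# Parametric candidate: straight line by least squares.
b, a = np.polyfit(x_tr, y_tr, 1)
mse_lin = np.mean((y_te - (a + b * x_te)) ** 2)

# Nonparametric candidate: k-nearest-neighbour average, k = 15.
pred = np.array([y_tr[np.argsort(np.abs(x_tr - x0))[:15]].mean()
                 for x0 in x_te])
mse_knn = np.mean((y_te - pred) ** 2)

print(mse_lin, mse_knn)   # both are on the same predictive scale
```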
43,635 | Micro vs weighted F1 score | Micro f1 is based on global precision and recall. It treats each test case equally and doesn't give advantages to small classes. I think it's more suitable.
This article "Macro- and micro-averaged evaluation measures" from Vincent Van Asch at the University of Antwerp explains many different kinds of F1 scores.
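A sketch of both averages computed directly from their definitions (the labels are arbitrary); note that for exhaustive single-label classification, micro F1 equals plain accuracy:

```python
import numpy as np

def micro_weighted_f1(y_true, y_pred):
    classes = np.unique(np.concatenate([y_true, y_pred]))
    f1, support = [], []
    tp_all = fp_all = fn_all = 0
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
        denom = 2 * tp + fp + fn
        f1.append(2 * tp / denom if denom else 0.0)
        support.append(np.sum(y_true == c))
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)
    weighted = np.average(f1, weights=support)   # support-weighted average
    return micro, weighted

y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 1, 1, 2, 0])
micro, weighted = micro_weighted_f1(y_true, y_pred)
print(micro, weighted)   # micro (= accuracy here) is 0.625
```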
43,636 | Training LSTM a sequence one item at a time | Train it one character at a time.
It shouldn't diverge unless the characters are the same and have different ideal outputs. In that case consider using one-hot vectors instead of scalar values. Meaning, if a, b, and c are your characters, then when a is the character, 1, 0, 0 is the input.
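A minimal sketch of the one-hot encoding suggested above (the three-character alphabet is hypothetical):

```python
import numpy as np

alphabet = ['a', 'b', 'c']
one_hot = {ch: np.eye(len(alphabet))[i] for i, ch in enumerate(alphabet)}

for ch in 'abca':           # feed the sequence one character at a time
    print(ch, one_hot[ch])  # 'a' -> [1. 0. 0.], 'b' -> [0. 1. 0.], ...
```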
43,637 | Fastest way to solve Bayes estimator problem | Found a quicker way:
We want to minimize $$r(\delta):=E\left(\frac{c\sqrt \theta - \delta}{c\sqrt \theta}\right)^2=E\left(\frac{c^2\theta - 2c\sqrt \theta \delta+\delta ^2}{c^2 \theta}\right)$$ under the posterior distribution. For the no-data problem we get
$$
r(\delta) = 1 - \delta 2c^{-1} E\theta^{-1/2} + \delta...
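The quadratic in $\delta$ is cut off above; completing it and minimizing (a sketch consistent with the terms already shown, with expectations taken under the prior for the no-data problem):
$$
r(\delta) = 1 - 2\delta c^{-1} E\,\theta^{-1/2} + \delta^2 c^{-2} E\,\theta^{-1},
\qquad
r'(\delta)=0 \;\Longrightarrow\; \delta^{*} = c\,\frac{E\,\theta^{-1/2}}{E\,\theta^{-1}}.
$$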
43,638 | R: Model selection with categorical variables using leaps and glmnet | regsubsets (a function in the leaps package that also performs exhaustive model searches) can accept categorical variables that are not split out into dummy variables and, thus, treats them as groups of variables that are either all part of a model or not.
For example, if Year has levels 2013, 2014 and Treatment has...
43,639 | Forecasting daily visits using ARIMA with external regressors | An answer without testing/p-values, but with roughly estimating confidence intervals: Adding twice the s.e. (Standard error) on your coefficients should give you approximately 95%-confidence intervals for each one. From that perspective, the 95%-confidence interval for Sunday is roughly speaking between -1800 and -1100...
43,640 | How to test causation in econometrics? | I want to suggest reading an interview with Angus Deaton, the most recent Nobel Laureate in economics, for a frank assessment of the issues raised by the OPs "channel" question regarding their "test and comparison"...here's the link:
https://medium.com/@timothyogden/experimental-conversations-angus-deaton-b2f768dffd57...
43,641 | How to test causation in econometrics? | If you have a regression model $y=b_0+b_1 x_1 +b_2 x_2 + e$ with $e$ uncorrelated with the regressors, then $\sigma^2_y=b_1^2\sigma^2_{x_1}+b_2^2\sigma^2_{x_2}+2b_1b_2\sigma_{x_1,x_2}+\sigma^2_e$. In this regard you could see $b_1^2\sigma^2_{x_1}$ as the part of the variance $\sigma^2_y$ channeled through variable $x_1$, IF the covariance $\sigma_{x_1,x_2}$ ...
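With errors uncorrelated with the regressors, $\sigma^2_y = b_1^2\sigma^2_{x_1} + b_2^2\sigma^2_{x_2} + 2b_1b_2\sigma_{x_1,x_2} + \sigma^2_e$; a simulation check with made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
cov = np.array([[1.0, 0.4],
                [0.4, 2.0]])                      # Var(x1), Cov, Var(x2)
x1, x2 = rng.multivariate_normal([0, 0], cov, size=n).T
e = rng.normal(scale=0.7, size=n)
b0, b1, b2 = 1.0, 2.0, -1.5
y = b0 + b1 * x1 + b2 * x2 + e

theoretical = (b1**2 * cov[0, 0] + b2**2 * cov[1, 1]
               + 2 * b1 * b2 * cov[0, 1] + 0.7**2)
print(y.var(), theoretical)   # should agree closely
```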
43,642 | Sums-of-Squares (total, between, within): how to compute them from a Distance Matrix? | Instructions for how you can compute the sums of squares SSt, SSb, SSw out of a matrix of (euclidean) distances between cases (data points) without having the cases x variables dataset at hand. You don't need to know the centroids' coordinates (the group means) - they pass invisibly "in the background": euclidean geometry laws al...
43,643 | Sums-of-Squares (total, between, within): how to compute them from a Distance Matrix? | Sum of squares is closely tied to Euclidean distance. Hamming (on bits) is a special case, as it is the same as Euclidean (on bits), but you cannot conclude from an arbitrary distance matrix what the SSQ etc. are.
Recall how the sum-of-squares are usually defined, compared to Euclidean distance:
$$
SSQ(A,B) = \sum_{a\i...
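Both answers rest on the Euclidean identity $\mathrm{SS} = \frac{1}{n}\sum_{i<j} d_{ij}^2$ for squared deviations about the (never computed) centroid; a numerical sketch with random data and an arbitrary 3-group split:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))                       # cases x variables
labels = rng.integers(0, 3, size=30)               # arbitrary grouping
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

def ss_from_distances(D):
    # sum_{i<j} d_ij^2 / n, written over the full symmetric matrix
    return (D ** 2).sum() / (2 * D.shape[0])

SSt = ss_from_distances(D)
SSw = sum(ss_from_distances(D[np.ix_(g, g)])
          for g in (np.where(labels == k)[0] for k in np.unique(labels)))
SSb = SSt - SSw                                    # between = total - within

direct = ((X - X.mean(axis=0)) ** 2).sum()         # centroid-based check
print(SSt, direct, SSb)
```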
43,644 | Multinomial logistic regression assumptions | The key assumption in the MNL is that the errors are independently and identically distributed with a Gumbel extreme value distribution. The problem with testing this assumption is that it is made a priori. In standard regression you fit the least-squares curve, and measure the residual error. In a logit model, you ass...
43,645 | Multinomial logistic regression assumptions | Assumptions:
Outcome follows a categorical distribution (http://en.wikipedia.org/wiki/Categorical_distribution), which is linked to the covariates via a link function as in ordinary logistic regression
Independence of observational units
Linear relation between covariates and (link-transformed) expectation of the outc...
43,646 | Multinomial logistic regression assumptions | One of the most important practical assumptions of multinomial logistic regression is that the number of observations in the smallest frequency category of $Y$ is large, for example 10 times the number of parameters from the right-hand side of the model.
43,647 | Multinomial logistic regression assumptions | @h_bauer has provided a good answer. Let me add a small complementary point: You can also test for a curvilinear relationship by adding curvilinear terms and performing a nested model test. For example, imagine you have $X_1$ as an explanatory variable, but you aren't sure whether the relationship between it and the...
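The nested model test can be sketched numerically. For brevity this uses a binary logistic model rather than a multinomial one (the same likelihood-ratio logic applies equation by equation); the data-generating values are made up:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
eta = -0.5 + 1.0 * x + 0.8 * x**2                 # truly curvilinear logit
y = rng.random(n) < 1.0 / (1.0 + np.exp(-eta))

def max_loglik(X, y):
    # Unpenalized logistic log-likelihood, maximized numerically.
    nll = lambda b: np.sum(np.logaddexp(0.0, X @ b) - y * (X @ b))
    return -minimize(nll, np.zeros(X.shape[1]), method="BFGS").fun

X_lin = np.column_stack([np.ones(n), x])
X_quad = np.column_stack([np.ones(n), x, x**2])

lr = 2 * (max_loglik(X_quad, y) - max_loglik(X_lin, y))
p = chi2.sf(lr, df=1)                             # one extra parameter
print(lr, p)   # a large statistic and tiny p-value: keep the squared term
```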
43,648 | Multinomial logistic regression assumptions | gmacfarlane has been very clear. But to be more precise, and I assume you perform a cross section analysis, the core assumption is the IIA (independence of irrelevant alternatives).
You cannot force your data to fit the IIA assumption; you should test it and hope for it to be satisfied. SPSS could not handle the tes...
43,649 | Are log-linear models exponential models? | It is hard to say what is "usually referred to" without more context, as terminology is not well standardized across fields.
In the most common statistical context, I would say "log-linear model" refers to a Poisson GLM that is applied to a multi-way contingency table and presented in a special form. This is the way...
43,650 | Are log-linear models exponential models? | They are different, but it's a bit ambiguous without extra context.
Log-linear models usually refer to an OLS linear model with logged response, or sometimes a GLM with a Normal family, log link function. The Normal distribution is in the exponential family.
If you actually have an exponential response, you would us...
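A tiny sketch of the "OLS on the logged response" reading (simulated multiplicative data with made-up parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=1000)
# Multiplicative errors: log(y) is linear in x, so OLS on log(y) fits.
y = np.exp(1.0 + 0.5 * x) * np.exp(rng.normal(scale=0.1, size=1000))

slope, intercept = np.polyfit(x, np.log(y), 1)
print(slope, intercept)   # ~0.5 and ~1.0
```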
43,651 | Reference for the idea that a simpler model can be used when the range of data values is smaller | I think that this is not a reference-able concept; it's just about relative error. What you call "range of data values" is usually called "scale", and you would just say that a certain theory is descriptive enough for this scale. In the example of the rock, $F=-mg$ is enough to describe the dynamics of the rock, in the...
43,652 | Credit Risk and Concentration | In a regulatory environment there are actually three parameters tied to credit risk:
1) Exposure at default (EAD), which means the nominal amount of money your institution is at risk of losing. You could add unused credit limits/lines there if you want.
2) Loss given default (LGD), which means 1 - recovery rate. Recovery rate d...
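Putting the parameters together: the third one is cut off above (presumably the probability of default, PD), and expected loss is usually the product of the three. Toy, made-up numbers throughout:

```python
ead = 100_000     # exposure at default
lgd = 1 - 0.40    # 1 - recovery rate
pd_ = 0.02        # probability of default (hypothetical)
print(ead * lgd * pd_)   # expected loss ~ 1200
```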
43,653 | Credit Risk and Concentration | It is possible to reflect the concentration credit risk in terms of Economic Capital using a modification of the Vasicek VaR model. The Vasicek model is the key model for credit risk under the Basel II framework and it assumes that the credit risk exposures are uniform. Nevertheless, it is possible to modify it by app...
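For reference, the baseline Vasicek quantile itself (before any concentration modification of the kind alluded to here) is a one-liner; the PD, correlation and confidence level below are made up:

```python
from scipy.stats import norm

def vasicek_wcdr(pd, rho, alpha=0.999):
    """Worst-case default rate at confidence alpha in the one-factor
    Vasicek model underlying the Basel II formulas."""
    return norm.cdf((norm.ppf(pd) + rho**0.5 * norm.ppf(alpha))
                    / (1 - rho)**0.5)

print(vasicek_wcdr(0.02, 0.15))   # well above the 2% average default rate
```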
43,654 | Credit Risk and Concentration | Going by your comment: if you have a group of people with the same credit rating, and a total amount of X that you want to loan out, it is less risky to loan it to more people (where each person takes a smaller loan).
There are many ways to get to this result, but my favourite is through the Kelly criterion. Even thoug...
Going by your comment: if you have a group of people with the same credit rating, and a total amount of X that you want to loan out, it is less risky to loan it to more people (where each person takes a smaller loan.)
There are many ways to get to this result, but my favourite is through t... | Credit Risk and Concentration
Going by your comment: if you have a group of people with the same credit rating, and a total amount of X that you want to loan out, it is less risky to loan it to more people (where each person takes |
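The diversification claim above (same rating, smaller loans to more people is less risky) can be checked with a quick simulation. This is a plain variance argument rather than the Kelly derivation the answer favors, and every number here is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
p_default, total, n_sims = 0.05, 1.0, 20_000

def loss_sd(n_borrowers):
    # Split the same total exposure equally across n independent borrowers
    # with identical default probability; simulate the portfolio loss.
    defaults = rng.binomial(n_borrowers, p_default, size=n_sims)
    return (defaults * (total / n_borrowers)).std()

sd_few, sd_many = loss_sd(5), loss_sd(500)  # risk shrinks roughly as 1/sqrt(n)
```

The expected loss is the same either way; only the dispersion of outcomes shrinks as the exposure is split further.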
43,655 | Johansen test conditions and Breusch-Godfrey LM test | For your first question, yes, you need to have uncorrelated residuals. In general, you need to determine the lag order of the VAR THEN you perform a cointegration test. This means that there must be no residual autocorrelation in your VAR model (if there still is, you need to increase the lag order. Which lag order t... | Johansen test conditions and Breusch-Godfrey LM test | For your first question, yes, you need to have uncorrelated residuals. In general, you need to determine the lag order of the VAR THEN you perform a cointegration test. This means that there must be | Johansen test conditions and Breusch-Godfrey LM test
For your first question, yes, you need to have uncorrelated residuals. In general, you need to determine the lag order of the VAR THEN you perform a cointegration test. This means that there must be no residual autocorrelation in your VAR model (if there still is, ... | Johansen test conditions and Breusch-Godfrey LM test
For your first question, yes, you need to have uncorrelated residuals. In general, you need to determine the lag order of the VAR THEN you perform a cointegration test. This means that there must be
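As a rough illustration of the "no residual autocorrelation left once the lag order is adequate" requirement, here is a numpy-only sketch with a univariate AR(1) standing in for the VAR; the lag-1 residual autocorrelation plays the role of an informal Breusch-Godfrey-style check (a real application would use the LM test on the full VAR):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series y_t = 0.7 * y_{t-1} + e_t as a univariate
# stand-in for one equation of a VAR.
n, phi = 5_000, 0.7
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]

# Fit the correct lag order (one lag) by least squares.
X, Y = y[:-1], y[1:]
phi_hat = (X @ Y) / (X @ X)
resid = Y - phi_hat * X

# With an adequate lag order the residuals should show no leftover
# autocorrelation; r1 near zero is the informal version of the LM check.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
```

Refitting with too few lags (e.g. regressing on nothing) would leave a clearly nonzero residual autocorrelation, which is the signal to increase the lag order before running the Johansen test.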
43,656 | Are estimates of regression coefficients uncorrelated? | This is an important consideration in designing experiments, where it can be desirable to have no (or very little) correlation among the estimates $\hat a$ and $\hat b$. Such lack of correlation can be achieved by controlling the values of the $X_i$.
To analyze the effects of the $X_i$ on the estimates, the values $(... | Are estimates of regression coefficients uncorrelated? | This is an important consideration in designing experiments, where it can be desirable to have no (or very little) correlation among the estimates $\hat a$ and $\hat b$. Such lack of correlation can | Are estimates of regression coefficients uncorrelated?
This is an important consideration in designing experiments, where it can be desirable to have no (or very little) correlation among the estimates $\hat a$ and $\hat b$. Such lack of correlation can be achieved by controlling the values of the $X_i$.
To analyze t... | Are estimates of regression coefficients uncorrelated?
This is an important consideration in designing experiments, where it can be desirable to have no (or very little) correlation among the estimates $\hat a$ and $\hat b$. Such lack of correlation can |
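The point about controlling the $X_i$ can be illustrated by simulation: with a centered design ($\bar X = 0$), the estimates $\hat a$ and $\hat b$ come out uncorrelated. A small sketch with made-up values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Centered design: mean(x) = 0, which is what makes
# Cov(a_hat, b_hat) = -sigma^2 * mean(x) / Sxx vanish.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
a_true, b_true, sigma = 1.0, 0.5, 1.0

a_hats, b_hats = [], []
for _ in range(4_000):
    y = a_true + b_true * x + sigma * rng.standard_normal(x.size)
    sxx = ((x - x.mean()) ** 2).sum()
    b_hat = ((x - x.mean()) * (y - y.mean())).sum() / sxx
    a_hat = y.mean() - b_hat * x.mean()
    a_hats.append(a_hat)
    b_hats.append(b_hat)

corr = np.corrcoef(a_hats, b_hats)[0, 1]  # should be close to zero
```

Shifting the same design so that $\bar X \ne 0$ would make the empirical correlation clearly negative (for positive $\bar X$), matching the covariance formula.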
43,657 | Reference with distributions with various properties | The most comprehensive collection of distributions and their properties that I know of are
Johnson, Kotz, Balakrishnan: Continuous Univariate Distributions Volume 1 and 2;
Kotz, Johnson, Balakrishnan: Continuous Multivariate Distributions;
Johnson, Kemp, Kotz: Univariate Discrete Distributions;
Johnson, Kotz, Balakri... | Reference with distributions with various properties | The most comprehensive collection of distributions and their properties that I know of are
Johnson, Kotz, Balakrishnan: Continuous Univariate Distributions Volume 1 and 2;
Kotz, Johnson, Balakrishna | Reference with distributions with various properties
The most comprehensive collection of distributions and their properties that I know of are
Johnson, Kotz, Balakrishnan: Continuous Univariate Distributions Volume 1 and 2;
Kotz, Johnson, Balakrishnan: Continuous Multivariate Distributions;
Johnson, Kemp, Kotz: Univ... | Reference with distributions with various properties
The most comprehensive collection of distributions and their properties that I know of are
Johnson, Kotz, Balakrishnan: Continuous Univariate Distributions Volume 1 and 2;
Kotz, Johnson, Balakrishna |
43,658 | Reference with distributions with various properties | Honestly, there are way too many distributions that I have no idea about. I do believe, however, that knowing them is not an asset; one must know how to use them.
Anyway, back to your question, I always find this diagram quite informative and useful, it's like a probability distributions cheatsheet.
http://jonfwilkins.c... | Reference with distributions with various properties | Honestly, there are way too many distributions that I have no idea about. I do believe, however, that knowing them is not an asset; one must know how to use them.
Anyway, back to your question, I alway | Reference with distributions with various properties
Honestly, there are way too many distributions that I have no idea about. I do believe, however, that knowing them is not an asset; one must know how to use them.
Anyway, back to your question, I always find this diagram quite informative and useful, it's like a probabi... | Reference with distributions with various properties
Honestly, there are way too many distributions that I have no idea about. I do believe, however, that knowing them is not an asset; one must know how to use them.
Anyway, back to your question, I alway
43,659 | Reference with distributions with various properties | No book could cover all distributions, as it is always possible to invent new ones. But
Statistical distributions by Catherine Forbes et al. is a concise book covering many of the more commonly used distributions
while
A primer on statistical distributions by N. Balakrishnan and V.B. Nezvorov
is also fairly concise,... | Reference with distributions with various properties | No book could cover all distributions, as it is always possible to invent new ones. But
Statistical distributions by Catherine Forbes et al. is a concise book covering many of the more commonly used | Reference with distributions with various properties
No book could cover all distributions, as it is always possible to invent new ones. But
Statistical distributions by Catherine Forbes et al. is a concise book covering many of the more commonly used distributions
while
A primer on statistical distributions by N. B... | Reference with distributions with various properties
No book could cover all distributions, as it is always possible to invent new ones. But
Statistical distributions by Catherine Forbes et al. is a concise book covering many of the more commonly used |
43,660 | Reference with distributions with various properties | Merran Evans, Nicholas Hastings, Brian Peacock - Statistical distributions - John Wiley and Sons
I have the second edition and the distributions are in simple alphabetical order (from Bernoulli to Wishart central distribution). | Reference with distributions with various properties | Merran Evans, Nicholas Hastings, Brian Peacock - Statistical distributions - John Wiley and Sons
I have the second edition and the distributions are in simple alphabetical order (from Bernoulli to Wis | Reference with distributions with various properties
Merran Evans, Nicholas Hastings, Brian Peacock - Statistical distributions - John Wiley and Sons
I have the second edition and the distributions are in simple alphabetical order (from Bernoulli to Wishart central distribution). | Reference with distributions with various properties
Merran Evans, Nicholas Hastings, Brian Peacock - Statistical distributions - John Wiley and Sons
I have the second edition and the distributions are in simple alphabetical order (from Bernoulli to Wis |
43,661 | Reference with distributions with various properties | The Hand-book on Statistical Distributions for Experimentalists by Christian Walck at the University of Stockholm is pretty decent....and FREE!! It covers over 40 distributions from A to Z, with each distribution described with its formulas, moments, moment generating function, characteristic function, how to generate ... | Reference with distributions with various properties | The Hand-book on Statistical Distributions for Experimentalists by Christian Walck at the University of Stockholm is pretty decent....and FREE!! It covers over 40 distributions from A to Z, with each | Reference with distributions with various properties
The Hand-book on Statistical Distributions for Experimentalists by Christian Walck at the University of Stockholm is pretty decent....and FREE!! It covers over 40 distributions from A to Z, with each distribution described with its formulas, moments, moment generatin... | Reference with distributions with various properties
The Hand-book on Statistical Distributions for Experimentalists by Christian Walck at the University of Stockholm is pretty decent....and FREE!! It covers over 40 distributions from A to Z, with each |
43,662 | Reference with distributions with various properties | Ben Bolker's "Ecological Models and Data in R" has a section "bestiary of distributions" (pp 160-181) with descriptions of the properties and applications of many common and useful distributions.
It is written at the level of a grad level course in ecology, so it is accessible to non-statisticians. Less dense than th... | Reference with distributions with various properties | Ben Bolker's "Ecological Models and Data in R" has a section "bestiary of distributions" (pp 160-181) with descriptions of the properties and applications of many common and useful distributions.
It | Reference with distributions with various properties
Ben Bolker's "Ecological Models and Data in R" has a section "bestiary of distributions" (pp 160-181) with descriptions of the properties and applications of many common and useful distributions.
It is written at the level of a grad level course in ecology, so it i... | Reference with distributions with various properties
Ben Bolker's "Ecological Models and Data in R" has a section "bestiary of distributions" (pp 160-181) with descriptions of the properties and applications of many common and useful distributions.
It |
43,663 | Reference with distributions with various properties | The Loss Models by Panjer, Willmot and Klugman contains a good appendix regarding distribution pdfs, their support and parameter estimation. | Reference with distributions with various properties | The Loss Models by Panjer, Willmot and Klugman contains a good appendix regarding distribution pdfs, their support and parameter estimation. | Reference with distributions with various properties
The Loss Models by Panjer, Willmot and Klugman contains a good appendix regarding distribution pdfs, their support and parameter estimation. | Reference with distributions with various properties
The Loss Models by Panjer, Willmot and Klugman contains a good appendix regarding distribution pdfs, their support and parameter estimation.
43,664 | Reference with distributions with various properties | A study of bivariate distributions cannot be complete without a sound background knowledge of the univariate distributions, which would naturally form the marginal or conditional distributions. The two encyclopedic volumes by Johnson et al. (1994, 1995) are the most comprehensive texts to date on continuous univariate ... | Reference with distributions with various properties | A study of bivariate distributions cannot be complete without a sound background knowledge of the univariate distributions, which would naturally form the marginal or conditional distributions. The tw | Reference with distributions with various properties
A study of bivariate distributions cannot be complete without a sound background knowledge of the univariate distributions, which would naturally form the marginal or conditional distributions. The two encyclopedic volumes by Johnson et al. (1994, 1995) are the most ... | Reference with distributions with various properties
A study of bivariate distributions cannot be complete without a sound background knowledge of the univariate distributions, which would naturally form the marginal or conditional distributions. The tw |
43,665 | Reference with distributions with various properties | The series of books by Johnson, Kotz & Balakrishnan (edit: which Nick has also mentioned; the original books were by the first two authors) are probably the most comprehensive. You probably want to start with Continuous Univariate Distributions, Vols I and II.
A couple more:
Evans, Hastings & Peacock, Statistical Distr... | Reference with distributions with various properties | The series of books by Johnson, Kotz & Balakrishnan (edit: which Nick has also mentioned; the original books were by the first two authors) are probably the most comprehensive. You probably want to st | Reference with distributions with various properties
The series of books by Johnson, Kotz & Balakrishnan (edit: which Nick has also mentioned; the original books were by the first two authors) are probably the most comprehensive. You probably want to start with Continuous Univariate Distributions, Vols I and II.
A coup... | Reference with distributions with various properties
The series of books by Johnson, Kotz & Balakrishnan (edit: which Nick has also mentioned; the original books were by the first two authors) are probably the most comprehensive. You probably want to st |
43,666 | Test whether two rank orders differ | So if I understand you correctly, is the question you are trying to answer if there is a difference in the way individuals change across the two tests relative to one another? If so your null hypothesis would be the response of individuals is the same. You could test this using the actual values instead of their ranks ... | Test whether two rank orders differ | So if I understand you correctly, is the question you are trying to answer if there is a difference in the way individuals change across the two tests relative to one another? If so your null hypothes | Test whether two rank orders differ
So if I understand you correctly, is the question you are trying to answer if there is a difference in the way individuals change across the two tests relative to one another? If so your null hypothesis would be the response of individuals is the same. You could test this using the a... | Test whether two rank orders differ
So if I understand you correctly, is the question you are trying to answer if there is a difference in the way individuals change across the two tests relative to one another? If so your null hypothes |
43,667 | Test whether two rank orders differ | If I understood correctly, you want to test if a condition affects the score order of a population. I guess what you are looking for is the Friedman test or its generalization. On wikipedia : http://en.wikipedia.org/wiki/Friedman_test. I think the example with the judges and the wines looks very similar to your problem... | Test whether two rank orders differ | If I understood correctly, you want to test if a condition affects the score order of a population. I guess what you are looking for is the Friedman test or its generalization. On wikipedia : http://e | Test whether two rank orders differ
If I understood correctly, you want to test if a condition affects the score order of a population. I guess what you are looking for is the Friedman test or its generalization. On wikipedia : http://en.wikipedia.org/wiki/Friedman_test. I think the example with the judges and the wine... | Test whether two rank orders differ
If I understood correctly, you want to test if a condition affects the score order of a population. I guess what you are looking for is the Friedman test or its generalization. On wikipedia : http://e |
43,668 | Test whether two rank orders differ | I think the Wilcoxon signed-rank test will work in this situation. The null hypothesis is that the 2 sets of ranks are equal. | Test whether two rank orders differ | I think the Wilcoxon signed-rank test will work in this situation. The null hypothesis is that the 2 sets of ranks are equal. | Test whether two rank orders differ
I think the Wilcoxon signed-rank test will work in this situation. The null hypothesis is that the 2 sets of ranks are equal. | Test whether two rank orders differ
I think the Wilcoxon signed-rank test will work in this situation. The null hypothesis is that the 2 sets of ranks are equal.
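A quick sketch of testing the paired differences, as the answers above suggest. This uses a sign-flip permutation test on the paired differences (an exact-style analogue of the Wilcoxon signed-rank idea, not that test itself); all scores are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical paired scores: 12 individuals measured under two conditions.
test1 = np.array([10, 12, 9, 14, 11, 13, 8, 15, 10, 12, 11, 13], float)
test2 = test1 + rng.normal(1.5, 1.0, test1.size)  # the condition shifts scores

d = test2 - test1
obs = d.mean()

# Sign-flip permutation test: under H0 (no condition effect) each paired
# difference is symmetric about zero, so randomly flipping signs gives
# the null distribution of the mean difference.
flips = rng.choice([-1.0, 1.0], size=(10_000, d.size))
null = (flips * d).mean(axis=1)
p_value = float((np.abs(null) >= abs(obs)).mean())
```

Replacing the raw differences with their signed ranks in the same scheme recovers a permutation version of the Wilcoxon signed-rank test.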
43,669 | MANOVA with unequal sample sizes | As in ANOVA, when cells in a factorial MANOVA have different sample sizes, the sum of squares for effect plus error does not equal the total sum of squares. This causes tests of main effects and interactions to be correlated. SPSS offers an adjustment for unequal sample sizes in MANOVA.
For further information have a... | MANOVA with unequal sample sizes | As in ANOVA, when cells in a factorial MANOVA have different sample sizes, the sum of squares for effect plus error does not equal the total sum of squares. This causes tests of main effects and inter | MANOVA with unequal sample sizes
As in ANOVA, when cells in a factorial MANOVA have different sample sizes, the sum of squares for effect plus error does not equal the total sum of squares. This causes tests of main effects and interactions to be correlated. SPSS offers an adjustment for unequal sample sizes in MANOVA... | MANOVA with unequal sample sizes
As in ANOVA, when cells in a factorial MANOVA have different sample sizes, the sum of squares for effect plus error does not equal the total sum of squares. This causes tests of main effects and inter |
43,670 | Geometric Interpretation of Softmax Regression | To start, I'll be referring to your blogpost on softmax regression.
The analysis performed there is almost complete, all it needs is the following: when we want to predict a class during test time, we simply take the class with the highest probability.
Say we want to see the decision region for class 1. It corresponds ... | Geometric Interpretation of Softmax Regression | To start, I'll be referring to your blogpost on softmax regression.
The analysis performed there is almost complete, all it needs is the following: when we want to predict a class during test time, we | Geometric Interpretation of Softmax Regression
To start, I'll be referring to your blogpost on softmax regression.
The analysis performed there is almost complete, all it needs is the following: when we want to predict a class during test time, we simply take the class with the highest probability.
Say we want to see t... | Geometric Interpretation of Softmax Regression
To start, I'll be referring to your blogpost on softmax regression.
The analysis performed there is almost complete, all it needs is the following: when we want to predict a class during test time, we |
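The argument above can be checked numerically: since softmax is monotone in the linear scores, predicting the highest-probability class is the same as taking the argmax of the scores, so each decision region is an intersection of half-planes $(w_i - w_j)\cdot x + (b_i - b_j) \ge 0$. A small sketch with random weights:

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.standard_normal((3, 2))  # one weight row per class (3 classes, 2 features)
b = rng.standard_normal(3)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

X = rng.standard_normal((200, 2))
scores = X @ W.T + b

# softmax preserves the ordering of the scores, so the predicted class is
# just the argmax of the linear scores; the regions are therefore polyhedral.
pred_soft = softmax(scores).argmax(axis=1)
pred_linear = scores.argmax(axis=1)
agree = bool((pred_soft == pred_linear).all())
```

The two argmaxes agree on every point, which is exactly the claim that the decision boundaries of softmax regression are linear.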
43,671 | What does the residual higher level variance tell me? | It could be possible. If you code an east-west variable, a simple binary variable, check its correlation with your mode variable. If they are very highly correlated, then multicollinearity may be at play, i.e. your mode variable may in fact be explaining away the east-west divide. | What does the residual higher level variance tell me? | It could be possible. If you code an east-west variable, a simple binary variable, check its correlation with your mode variable. If they are very highly correlated, then multicollinearity may be at | What does the residual higher level variance tell me?
It could be possible. If you code an east-west variable, a simple binary variable, check its correlation with your mode variable. If they are very highly correlated, then multicollinearity may be at play, i.e. your mode variable may in fact be explaining away the e... | What does the residual higher level variance tell me?
It could be possible. If you code an east-west variable, a simple binary variable, check its correlation with your mode variable. If they are very highly correlated, then multicollinearity may be at
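The suggested check is a one-liner. Here is a sketch with entirely fabricated data, in which a hypothetical mode variable is constructed to track an east-west dummy:

```python
import numpy as np

rng = np.random.default_rng(6)

# Fabricated data: 'east' is a 0/1 region dummy; the hypothetical 'mode'
# variable is built to follow the east-west divide closely.
east = rng.integers(0, 2, 500).astype(float)
mode = east + rng.normal(0.0, 0.3, 500)

r = np.corrcoef(east, mode)[0, 1]  # a high |r| signals the overlap
```

A point-biserial correlation this large would support the multicollinearity reading: the mode variable is already carrying most of the regional information.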
43,672 | Comparing model fits or regression coefficients for nonlinear models fitted to different data sets | To compare models you need to have replicates at each level of the predictor. This allows you to partition the SS(residual) into SS(lack of fit) plus SS(pure error). Ideally SS(LOF)->zero. You can test this using an F-test.(https://en.wikipedia.org/wiki/Lack-of-fit_sum_of_squares). If your models all have two param... | Comparing model fits or regression coefficients for nonlinear models fitted to different data sets | To compare models you need to have replicates at each level of the predictor. This allows you to partition the SS(residual) into SS(lack of fit) plus SS(pure error). Ideally SS(LOF)->zero. You can | Comparing model fits or regression coefficients for nonlinear models fitted to different data sets
To compare models you need to have replicates at each level of the predictor. This allows you to partition the SS(residual) into SS(lack of fit) plus SS(pure error). Ideally SS(LOF)->zero. You can test this using an F-... | Comparing model fits or regression coefficients for nonlinear models fitted to different data sets
To compare models you need to have replicates at each level of the predictor. This allows you to partition the SS(residual) into SS(lack of fit) plus SS(pure error). Ideally SS(LOF)->zero. You can |
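The partition SS(residual) = SS(lack of fit) + SS(pure error) described above can be sketched as follows, with simulated replicated data (all numbers illustrative; the data are generated from a genuinely linear model, so the F statistic should be unremarkable):

```python
import numpy as np

rng = np.random.default_rng(5)

# Replicated design: 5 levels of the predictor, 4 replicates each.
levels = np.repeat(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), 4)
y = 2.0 + 0.5 * levels + rng.normal(0.0, 0.3, levels.size)

# Fit a straight line and compute SS(residual).
b1, b0 = np.polyfit(levels, y, 1)  # slope first, then intercept
ss_resid = ((y - (b0 + b1 * levels)) ** 2).sum()

# Pure error: variation of replicates around their own group means.
uniq = np.unique(levels)
ss_pure = sum(((y[levels == u] - y[levels == u].mean()) ** 2).sum() for u in uniq)
ss_lof = ss_resid - ss_pure  # lack of fit: what the line cannot explain

n, k, p = levels.size, uniq.size, 2
F = (ss_lof / (k - p)) / (ss_pure / (n - k))  # ~ F(k - p, n - k) under H0
```

A large F relative to the $F(k-p,\,n-k)$ reference distribution would indicate that the fitted model form is inadequate, which is the SS(LOF) → 0 criterion mentioned above.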
43,673 | Comparing model fits or regression coefficients for nonlinear models fitted to different data sets |
You can check the following link
http://www.graphpad.c... | Comparing model fits or regression coefficients for nonlinear models fitted to different data sets |
| Comparing model fits or regression coefficients for nonlinear models fitted to different data sets
... | Comparing model fits or regression coefficients for nonlinear models fitted to different data sets
|
43,674 | Practical data collection tips for performing a network meta analysis (mixed treatment comparisons) | In principle, there is nothing special about computing the required effect sizes for a network meta-analysis. Let's stick to Cohen's d here. So, for each study, you just compute the $d$ value, either using the raw means and SDs or via some appropriate transformation of some other statistic.
As a simple example (what yo... | Practical data collection tips for performing a network meta analysis (mixed treatment comparisons) | In principle, there is nothing special about computing the required effect sizes for a network meta-analysis. Let's stick to Cohen's d here. So, for each study, you just compute the $d$ value, either | Practical data collection tips for performing a network meta analysis (mixed treatment comparisons)
In principle, there is nothing special about computing the required effect sizes for a network meta-analysis. Let's stick to Cohen's d here. So, for each study, you just compute the $d$ value, either using the raw means ... | Practical data collection tips for performing a network meta analysis (mixed treatment comparisons)
In principle, there is nothing special about computing the required effect sizes for a network meta-analysis. Let's stick to Cohen's d here. So, for each study, you just compute the $d$ value, either |
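Computing Cohen's d from raw data, as described above, can be sketched like this; the two arms and every value in them are hypothetical:

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Entirely hypothetical raw outcomes for two trial arms.
treatment = np.array([5.1, 4.8, 5.6, 5.9, 5.2, 5.4])
control = np.array([4.2, 4.0, 4.5, 4.1, 4.6, 4.3])
d = cohens_d(treatment, control)
```

When only summary statistics are reported, the same formula applies with the published means, SDs, and sample sizes in place of the raw arrays.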
43,675 | Practical data collection tips for performing a network meta analysis (mixed treatment comparisons) | All the meta-analytic models I have seen require the raw data but if someone is familiar with WinBugs, they might be able to help modify the code to fit your needs. There is a BUGS ListServ that Bayesians frequently post on. Maybe someone there can give you more guidance.
Ahmed | Practical data collection tips for performing a network meta analysis (mixed treatment comparisons) | All the meta-analytic models I have seen require the raw data but if someone is familiar with WinBugs, they might be able to help modify the code to fit your needs. There is a BUGS ListServ that Bayes | Practical data collection tips for performing a network meta analysis (mixed treatment comparisons)
All the meta-analytic models I have seen require the raw data but if someone is familiar with WinBugs, they might be able to help modify the code to fit your needs. There is a BUGS ListServ that Bayesians frequently post... | Practical data collection tips for performing a network meta analysis (mixed treatment comparisons)
All the meta-analytic models I have seen require the raw data but if someone is familiar with WinBugs, they might be able to help modify the code to fit your needs. There is a BUGS ListServ that Bayes |
43,676 | Average Structural Function Calculation | The original poster solved his own problem and posted the results on his blog. | Average Structural Function Calculation | The original poster solved his own problem and posted the results on his blog. | Average Structural Function Calculation
The original poster solved his own problem and posted the results on his blog. | Average Structural Function Calculation
The original poster solved his own problem and posted the results on his blog. |
43,677 | What could "directional mean" be in this context? | I think it is not exactly standard terminology, hence it can mean pretty much anything.
However, I have found this article which seems to be in the same genre: http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1002621
This makes me suspect that directional may have been used instead of conditional... | What could "directional mean" be in this context? | I think it is not exactly standard terminology, hence it can mean pretty much anything.
However, I have found this article which seems to be in the same genre: http://www.plosgenetics.org/article/info | What could "directional mean" be in this context?
I think it is not exactly standard terminology, hence it can mean pretty much anything.
However, I have found this article which seems to be in the same genre: http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1002621
This makes me suspect that dir... | What could "directional mean" be in this context?
I think it is not exactly standard terminology, hence it can mean pretty much anything.
However, I have found this article which seems to be in the same genre: http://www.plosgenetics.org/article/info |
43,678 | What is the correct way to determine the amount of difference between two proportions? | You can do a one-sided test but even if you do a two-sided test the side to which you exceed the threshold clearly tells you if you are better or worse.
The standard hypothesis test only tells you that you are significantly better but not by the magnitude. To show that the magnitude is greater than a specified $\Delta$... | What is the correct way to determine the amount of difference between two proportions? | You can do a one-sided test but even if you do a two-sided test the side to which you exceed the threshold clearly tells you if you are better or worse.
The standard hypothesis test only tells you tha | What is the correct way to determine the amount of difference between two proportions?
You can do a one-sided test but even if you do a two-sided test the side to which you exceed the threshold clearly tells you if you are better or worse.
The standard hypothesis test only tells you that you are significantly better bu... | What is the correct way to determine the amount of difference between two proportions?
You can do a one-sided test but even if you do a two-sided test the side to which you exceed the threshold clearly tells you if you are better or worse.
The standard hypothesis test only tells you tha |
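Testing that the difference exceeds a specified $\Delta$ rather than zero only changes the null value subtracted in the z statistic. A sketch with made-up counts; the unpooled standard error is one common large-sample choice here:

```python
from math import sqrt
from statistics import NormalDist

def z_test_margin(x1, n1, x2, n2, delta=0.0):
    """One-sided large-sample z test of H0: p1 - p2 <= delta against
    H1: p1 - p2 > delta, using the unpooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2 - delta) / se
    return z, 1 - NormalDist().cdf(z)

# Made-up counts: 120/200 successes vs 90/200, asking whether the first
# proportion beats the second by more than Delta = 0.05.
z, p = z_test_margin(120, 200, 90, 200, delta=0.05)
```

With `delta=0.0` this reduces to the ordinary one-sided two-proportion z test, so the magnitude requirement really is just a shift of the null.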
43,679 | Bayesian and frequentist reasoning in plain English | Here is how I would explain the basic difference to my grandma:
I have misplaced my phone somewhere in the home. I can use the phone locator on the base of the instrument to locate the phone and when I press the phone locator the phone starts beeping.
Problem: Which area of my home should I search?
Frequentist Reasonin... | Bayesian and frequentist reasoning in plain English | Here is how I would explain the basic difference to my grandma:
I have misplaced my phone somewhere in the home. I can use the phone locator on the base of the instrument to locate the phone and when | Bayesian and frequentist reasoning in plain English
Here is how I would explain the basic difference to my grandma:
I have misplaced my phone somewhere in the home. I can use the phone locator on the base of the instrument to locate the phone and when I press the phone locator the phone starts beeping.
Problem: Which a... | Bayesian and frequentist reasoning in plain English
Here is how I would explain the basic difference to my grandma:
I have misplaced my phone somewhere in the home. I can use the phone locator on the base of the instrument to locate the phone and when |
43,680 | Bayesian and frequentist reasoning in plain English | Tongue firmly in cheek:
A Bayesian defines a "probability" in exactly the same way that most non-statisticians do - namely an indication of the plausibility of a proposition or a situation. If you ask them a question about a particular proposition or situation, they will give you a direct answer assigning probabilitie... | Bayesian and frequentist reasoning in plain English | Tongue firmly in cheek:
A Bayesian defines a "probability" in exactly the same way that most non-statisticians do - namely an indication of the plausibility of a proposition or a situation. If you as | Bayesian and frequentist reasoning in plain English
Tongue firmly in cheek:
A Bayesian defines a "probability" in exactly the same way that most non-statisticians do - namely an indication of the plausibility of a proposition or a situation. If you ask them a question about a particular proposition or situation, they ... | Bayesian and frequentist reasoning in plain English
Tongue firmly in cheek:
A Bayesian defines a "probability" in exactly the same way that most non-statisticians do - namely an indication of the plausibility of a proposition or a situation. If you as |
43,681 | Bayesian and frequentist reasoning in plain English | Very crudely I would say that:
Frequentist: Sampling is infinite and decision rules can be sharp. Data are a repeatable random sample - there is a frequency. Underlying parameters are fixed i.e. they remain constant during this repeatable sampling process.
Bayesian: Unknown quantities are treated probabilistically and... | Bayesian and frequentist reasoning in plain English | Very crudely I would say that:
Frequentist: Sampling is infinite and decision rules can be sharp. Data are a repeatable random sample - there is a frequency. Underlying parameters are fixed i.e. they | Bayesian and frequentist reasoning in plain English
Very crudely I would say that:
Frequentist: Sampling is infinite and decision rules can be sharp. Data are a repeatable random sample - there is a frequency. Underlying parameters are fixed i.e. they remain constant during this repeatable sampling process.
Bayesian: ... | Bayesian and frequentist reasoning in plain English
Very crudely I would say that:
Frequentist: Sampling is infinite and decision rules can be sharp. Data are a repeatable random sample - there is a frequency. Underlying parameters are fixed i.e. they |
43,682 | Bayesian and frequentist reasoning in plain English | Let us say a man rolls a six sided die and it has outcomes 1, 2, 3, 4, 5, or 6. Furthermore, he says that if it lands on a 3, he'll give you a free text book.
Then informally:
The Frequentist would say that each outcome has an equal 1 in 6 chance of occurring. She views probability as being derived from long run freque...
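The frequentist reading above, probability as a long-run frequency over repeated rolls, can be checked with a short simulation (illustrative only, not part of the original answer; the function name and roll count are made up for the example):

```python
import random

# Simulate many die rolls and track how often a 3 comes up; the frequentist
# "1 in 6 chance" is a claim about this long-run relative frequency.
def empirical_freq_of_three(n_rolls: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_rolls) if rng.randint(1, 6) == 3)
    return hits / n_rolls

print(empirical_freq_of_three(600_000))  # settles near 1/6 ≈ 0.1667
```

With more rolls, the empirical frequency drifts ever closer to 1/6, which is exactly the long-run sense of "probability" the answer describes.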
43,683 | Bayesian and frequentist reasoning in plain English | Just a little bit of fun...
A Bayesian is one who, vaguely expecting a horse, and catching a glimpse of a donkey, strongly believes he has seen a mule.
From this site:
http://www2.isye.gatech.edu/~brani/isyebayes/jokes.html
and from the same site, a nice essay...
"An Intuitive Explanation of Bayes' Theorem"
http://yudk...
43,684 | Bayesian and frequentist reasoning in plain English | The Bayesian is asked to make bets, which may include anything from which fly will crawl up a wall faster to which medicine will save most lives, or which prisoners should go to jail. He has a big box with a handle. He knows that if he puts absolutely everything he knows into the box, including his personal opinion, an...
43,685 | Bayesian and frequentist reasoning in plain English | In plain English, I would say that Bayesian and Frequentist reasoning are distinguished by two different ways of answering the question:
What is probability?
Most differences will essentially boil down to how each answers this question, for it basically defines the domain of valid applications of the theory. Now you c...
43,686 | Bayesian and frequentist reasoning in plain English | In reality, I think much of the philosophy surrounding the issue is just grandstanding. That's not to dismiss the debate, but it is a word of caution. Sometimes, practical matters take priority - I'll give an example below.
Also, you could just as easily argue that there are more than two approaches:
Neyman-Pearson ...
43,687 | Bayesian and frequentist reasoning in plain English | Bayesian and frequentist statistics are compatible in that they can be understood as two limiting cases of assessing the probability of future events based on past events and an assumed model, if one admits that in the limit of a very large number of observations, no uncertainty about the system remains, and that in th...
43,688 | Bayesian and frequentist reasoning in plain English | I would say that they look at probability in different ways. The Bayesian is subjective and uses a priori beliefs to define a prior probability distribution on the possible values of the unknown parameters. So he relies on a theory of probability like de Finetti's. The frequentist sees probability as something that ha...
43,689 | Bayesian and frequentist reasoning in plain English | The simplest and clearest explanation I've seen, from Larry Wasserman's notes on Statistical Machine Learning (with disclaimer: "at the risk of oversimplifying"):
Frequentist versus Bayesian Methods
In frequentist inference, probabilities are interpreted as long run frequencies. The goal is to create procedures with ...
43,690 | Bayesian and frequentist reasoning in plain English | I've attempted a side-by-side comparison of the two schools of thought here and have more background information here.
43,691 | Bayesian and frequentist reasoning in plain English | The way I answer this question is that frequentists compare the data they see to what they expected. That is, they have a mental model on how frequent something should happen, and then see data and how often it did happen; i.e., how likely is the data they have seen given the model they chose.
Bayesian people, on the ot...
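The "how likely is the data given the model" question in the answer above can be made concrete with a tiny numeric sketch (an illustration, not the answerer's code; the fair-coin model and the 9-heads-in-10-flips data are made up for the example):

```python
from math import comb

# Model: a fair coin (p = 0.5). Data: 9 heads in 10 flips. The binomial pmf
# answers the frequentist question "how likely is this data under my model?"
def binom_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_pmf(9, 10, 0.5))  # 10/1024 ≈ 0.00977, i.e. surprising under the model
```

A small value here is the sense in which the observed data disagrees with what the chosen model "expected".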
43,692 | Bayesian and frequentist reasoning in plain English | In short plain English as follows:
In Bayesian, parameters vary and data are fixed
In Bayesian, $P(\theta|X)=\frac{P(X|\theta)P(\theta)}{P(X)}$ where $P(\theta|X)$ means parameters vary and data are fixed.
In frequentist, parameters are fixed and data vary
In frequentist, $P(\theta|X)=P(X|\theta)$ where $P(X|\theta...
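The Bayes-rule formula quoted above can be evaluated numerically on a small grid; the sketch below is illustrative only (the coin example, the grid of candidate values, and the heads/tails counts are assumptions, not part of the dataset row):

```python
# Posterior P(theta | X) = P(X | theta) P(theta) / P(X) for a coin's
# heads-probability theta, computed on a discrete grid of candidate values.
def posterior(grid, prior, heads, tails):
    lik = [t**heads * (1 - t)**tails for t in grid]   # P(X | theta)
    unnorm = [l * p for l, p in zip(lik, prior)]      # P(X | theta) P(theta)
    evidence = sum(unnorm)                            # P(X), summed over theta
    return [u / evidence for u in unnorm]

grid = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = [0.2] * 5          # uniform prior: the parameter is the random quantity
post = posterior(grid, prior, heads=7, tails=3)
print([round(p, 3) for p in post])  # mass concentrates near theta = 0.7
```

The data (7 heads in 10 flips) stays fixed while the probability mass moves across candidate values of theta, which is exactly the "parameters vary, data are fixed" reading.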
43,693 | What R packages do you find most useful in your daily work? | Please see link:
TOP 100 R PACKAGES FOR 2013 (JAN-MAY)
http://www.r-statistics.com/2013/06/top-100-r-packages-for-2013-jan-may/
43,694 | What R packages do you find most useful in your daily work? | I use plyr and ggplot2 the most on a daily basis.
I also rely heavily on time series packages; most especially, the zoo package.
43,695 | What R packages do you find most useful in your daily work? | In a narrow sense, R Core has a recommendation: the "recommended" packages.
Everything else depends on your data analysis tasks at hand, and I'd recommend the Task Views at CRAN.
43,696 | What R packages do you find most useful in your daily work? | I use the xtable package, which turns tables produced by R (in particular, the tables displaying the anova results) into LaTeX tables, to be included in an article.
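xtable itself is an R package; as a rough illustration of the same idea in Python (a hypothetical helper, not xtable's API), tabular data can be rendered as a LaTeX tabular environment like this:

```python
# Render rows of data as a LaTeX tabular environment, loosely analogous to
# what xtable does for R objects. Column alignment is hard-coded to "l".
def to_latex(rows, header):
    lines = ["\\begin{tabular}{" + "l" * len(header) + "}",
             " & ".join(header) + " \\\\ \\hline"]
    lines += [" & ".join(str(c) for c in row) + " \\\\" for row in rows]
    lines.append("\\end{tabular}")
    return "\n".join(lines)

print(to_latex([["A", 1.2], ["B", 3.4]], header=["group", "estimate"]))
```

The output can be pasted straight into a LaTeX document, which is the workflow the answer describes.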
43,697 | What R packages do you find most useful in your daily work? | multicore is quite a nice tool for making scripts faster.
cacheSweave saves a lot of time when using Sweave.
43,698 | What R packages do you find most useful in your daily work? | ggplot2 - hands down the best visualization for R.
RMySQL/RSQLite/RODBC - for connecting to databases
sqldf - manipulate data.frames with SQL queries
Hmisc/rms - packages from Frank Harrell containing convenient miscellaneous functions and nice functions for regression analyses.
GenABEL - nice package for genome-wide ass...
43,699 | What R packages do you find most useful in your daily work? | data.table is my favorite now! I very much look forward to the new version with more of the wishlist implemented.
43,700 | What R packages do you find most useful in your daily work? | Packages I often use are raster, sp, spatstat, vegan and splancs. I sometimes use ggplot2, tcltk and lattice.