idx | question | answer
49,201 | How to choose between plain vanilla RNN and LSTM RNN when modelling a time series?
Empirically. The criterion is performance on the validation set. Typically an LSTM outperforms a vanilla RNN, as it does a better job of avoiding the vanishing gradient problem and can model longer dependencies. Other RNN variants, e.g. the GRU, sometimes outperform the LSTM on some tasks.
FYI:
Greff, Klaus, Rupesh Kumar Srivasta...
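The "choose empirically on a validation set" advice is model-agnostic. As an illustration (not from the answer, and using simple autoregressive predictors as stand-ins for the RNN and LSTM candidates, since the selection procedure is the same), a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic AR(2) series standing in for the time series of interest.
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def lagged_design(series, lags):
    # Row i holds [y[t-1], ..., y[t-lags]] for t = lags + i; target is y[t].
    X = np.column_stack([series[lags - k - 1 : len(series) - k - 1] for k in range(lags)])
    return X, series[lags:]

scores = {}
for name, lags in {"AR(1)": 1, "AR(3)": 3}.items():
    X, target = lagged_design(y, lags)
    n_train = 400 - lags                      # train on the first 400 points
    A = np.c_[np.ones(n_train), X[:n_train]]
    beta, *_ = np.linalg.lstsq(A, target[:n_train], rcond=None)
    pred = np.c_[np.ones(len(X) - n_train), X[n_train:]] @ beta
    scores[name] = float(np.mean((target[n_train:] - pred) ** 2))

best = min(scores, key=scores.get)            # lowest validation MSE wins
```

The same loop applies unchanged when the candidates are an RNN and an LSTM: fit each on the training span, score on the held-out span, keep the winner.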
49,202 | By which ways can we, in principle, evaluate whether a model succeeded in generalizing?
One important notion of generalizability, especially in machine learning, is predictive accuracy: the degree to which a learner can predict the value of the dependent variable in cases it wasn't trained with. Predictive accuracy can be estimated with a wide variety of techniques, including a train-test split, cross-val...
49,203 | By which ways can we, in principle, evaluate whether a model succeeded in generalizing?
For medium to large datasets, most practitioners will use a holdout set. This is what you refer to as training, validation, and test sets. A holdout set consists of data that your model has never seen before. If your model generalizes well on the holdout set, then presumably it will generalize equally well on live prod...
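As a concrete (hypothetical) sketch of estimating predictive accuracy on data the model was not trained with, here is k-fold cross-validation for a linear model written out in plain numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=200)

def kfold_mse(X, y, k=5):
    """Estimate out-of-sample MSE: each fold is held out once while the model
    is fit on the remaining folds, and the held-out errors are averaged."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        A = np.c_[np.ones(len(train)), X[train]]
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.c_[np.ones(len(test)), X[test]] @ beta
        errs.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errs))

cv_mse = kfold_mse(X, y)
```

A train-test split is the k=1 special case: one fixed holdout, one fit.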
49,204 | Fixed Effects in a model I ANOVA. Why should the parameters sum to zero?
If you don't apply some such constraint, you could not identify any of the parameters, because you could add an arbitrary $\delta$ to each $\alpha_i$ and compensate by subtracting $\delta$ from $\mu$.
You are free to impose any set of constraints that will lead to identifiable parameters. The sum-to-zero constraints ...
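The identifiability argument can be checked numerically. This sketch (the group means and $\delta$ are made up) shifts every $\alpha_i$ by $\delta$ and compensates in $\mu$, leaving all cell means, and hence the likelihood, unchanged:

```python
import numpy as np

# Hypothetical one-way layout with 3 groups: cell mean for group i is mu + alpha[i].
mu, alpha = 10.0, np.array([1.0, -0.5, 2.0])
delta = 3.7
mu2, alpha2 = mu - delta, alpha + delta   # alternative parametrization

cell_means = mu + alpha
cell_means2 = mu2 + alpha2
# Both parameter sets imply identical cell means -> unidentifiable without a constraint.
assert np.allclose(cell_means, cell_means2)

# Imposing the sum-to-zero constraint pins down a unique (mu, alpha):
mu_c = cell_means.mean()
alpha_c = cell_means - mu_c
assert np.isclose(alpha_c.sum(), 0.0)
```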
49,205 | Thompson Sampling
This formula suffers from heavy notation, which perhaps makes it a bit difficult to digest.
Let $A$ be the random event that the action $a^*\in\mathcal{A}$ maximizes the expected reward
$$\bar{r}(a,\theta)=\mathbb{E}(r|a,\theta).$$
Let $\bar{r}^*(\theta)$ be the maximum expected reward for given $\theta$,
$$\bar{r}^*(\theta)...
49,206 | Same kernel for mixed/categorical data?
From a practical point of view, there are no issues with that practice, and some benefits (like a simplified framework).
From a theoretical point of view, coincidence between categorical features might not really mean much similarity; these similarities depend on the probabilities of occurrence, and these could (should?...
49,207 | Is it legit to run clustering on MDS result of a distance matrix?
MDS is mostly a visualization tool; it can suggest clusters, but it doesn't test whether the groupings you see are similar at a certain level. So the other papers you refer to were right to use MDS only to plot their data.
I previously used the software PRIMER to do clustering analysis and the package clustsig seems to be...
49,208 | Is it legit to run clustering on MDS result of a distance matrix?
That's certainly valid. You just have to keep in mind the tradeoff. You are forsaking some information to embed your $N$-dimensional data into a lower-dimensional space.
Multidimensional Scaling can be seen as a dimensionality reduction algorithm like any other, with the advantage being it tries to keep the hierarchica...
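Both answers can be illustrated in one sketch: run classical (Torgerson) MDS on a distance matrix, then cluster the low-dimensional embedding. The data, and the crude one-dimensional split standing in for a real clustering algorithm, are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two well-separated point clouds in 4-D; we then keep only their distance matrix.
pts = np.vstack([rng.normal(0, 0.5, (10, 4)), rng.normal(5, 0.5, (10, 4))])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Classical (Torgerson) MDS: double-center the squared distances, eigendecompose.
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
coords = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0))

# Cluster the 2-D embedding; a threshold on the first coordinate stands in
# for running k-means on the MDS result.
labels = (coords[:, 0] > coords[:, 0].mean()).astype(int)
```

The embedding discards everything beyond the top two eigen-directions, which is exactly the information tradeoff the second answer warns about.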
49,209 | Pandas Time Series DataFrame Missing Values
Keep the 2010 dates, but make the values NaN. If you have a time series DataFrame, then it's as simple as:
newDf = old.set_value('2010', 'Total Sales', float('nan'))
(Note that set_value is deprecated in recent pandas; old.loc['2010', 'Total Sales'] = float('nan') is the modern equivalent.)
If your data drop-out isn't exactly 2010, you can replace 0s with NaNs:
new = old.replace([0], float('nan'))
This will cause a "pen lift" (if you ca...
49,210 | Which is the dimension (or units) of the predicted random effects?
The issue is that you are attempting to take the logarithm of a variable which is not dimensionless.
There are a number of reasons to state that $\ln(x)$, $\exp(x)$, $\cos(x)$ and so on are properly defined (from a dimensional analysis point of view) only if $x$ is dimensionless. For example, if you define the $\exp$ f...
49,211 | Which is the dimension (or units) of the predicted random effects?
As Robin has pointed out in his answer, it is possible to deal with equations involving units by dividing through by a base value of a single unit, thus creating a unitless equation. However, it is also possible to deal directly with the dimensional quantities (i.e., with the units still included) by treating the unit...
49,212 | What is the advantage of sparsity?
I would not so much call it an advantage but a way out in otherwise intractable situations. In particular, in high-dimensional problems, we face the situation that the number of available predictors $p$ is much larger than the number of observations $n$. As is well known, classical methods like OLS do not work (do not ...
49,213 | What is the advantage of sparsity?
I found this interesting post on sparsity. In my opinion, sparsity helps even in terms of the statistical robustness of the solution. Having an overcomplete representation of the features helps average out the statistical fluctuations introduced during the training phase. But this is an intuition; I have no proof.
49,214 | Difference between first one-step ahead forecast and first forecast from fitted model
forecast always produces forecasts beyond the end of the data.
So forecast(fit) produces forecasts for observations 401, 402, ...
and forecast(refit) produces forecasts for observations 501, 502, ...
fitted produces one-step in-sample (i.e., training data) "forecasts". That is, it gives a forecast of observation t usin...
49,215 | Example of a consistent estimator that doesn't grow less variable with increased sample size?
The common meaning of "consistency" and its technical meaning are different. See this page for some discussion. Also, as noted by @hejseb in a comment on another answer here, lack of bias and consistency are not the same.
This quote from the Wikipedia page may help remove some confusion:
Bias is related to consistency...
49,216 | Maximum Likelihood estimation and the Kalman filter
I could be wrong, but what makes sense to me is this:
Define a function for the Kalman filtering and prediction. Make it output the log-likelihood (using v and the covariance matrix of v). The log-likelihood in this case is described in the Stack Exchange post you refer to. Make sure Q, R, mu_0 and A are free parame...
49,217 | Maximum Likelihood estimation and the Kalman filter
We need to first clarify things here. The original derivation of the Kalman filter is optimal for causal predictions. That means you predict at time $t$ given observations until time $t$.
Now for the maximum likelihood (ML) inference of parameters, assuming that these parameters are shared across time, during inference of...
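A minimal numerical sketch of the first answer's recipe, with all model choices and values invented: a scalar local-level model whose Kalman filter returns the prediction-error log-likelihood, which can then be handed to any optimizer (a coarse grid search here):

```python
import numpy as np

def kalman_loglik(y, q, r, m0=0.0, p0=1.0):
    """Log-likelihood of the local-level model x_t = x_{t-1} + w_t (var q),
    y_t = x_t + v_t (var r), accumulated from the one-step prediction errors."""
    m, p, ll = m0, p0, 0.0
    for obs in y:
        p = p + q                 # predict
        v = obs - m               # innovation
        s = p + r                 # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + v * v / s)
        k = p / s                 # Kalman gain
        m = m + k * v             # update
        p = (1 - k) * p
    return ll

rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(scale=1.0, size=300))        # true q = 1
y = x + rng.normal(scale=0.5, size=300)               # true r = 0.25

# Crude grid search over (q, r) standing in for a proper numerical optimizer.
grid = [0.1, 0.25, 1.0, 4.0]
best = max(((q, r) for q in grid for r in grid), key=lambda qr: kalman_loglik(y, *qr))
```

This is the "prediction error decomposition": the causal one-step innovations v and their variances s are exactly the ingredients of the likelihood both answers refer to.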
49,218 | Unbiased estimator of binomial parameter
Just notice that the probability generating function of $X\sim\mathsf{Bin}(m,p)$ is
$$E(a^X)=(1-p+pa)^m$$
Setting $1-p+ap=1+p$ gives $a=2$.
So for $X_i\sim \mathsf{Bin}(m,p)$ we have $$E(2^{X_i})=(1+p)^m$$
This also means $$E\left(\frac{1}{n}\sum_{i=1}^n 2^{X_i}\right)=(1+p)^m$$
Hence an unbiased estimator of $(1+p)^m$...
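The identity $E(2^{X})=(1+p)^m$ is easy to sanity-check by simulation (the values of $m$ and $p$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
m, p = 10, 0.3
samples = rng.binomial(m, p, size=200_000)

estimate = np.mean(2.0 ** samples)   # sample mean of 2^X, unbiased for (1+p)^m
target = (1 + p) ** m                # (1.3)^10

rel_err = abs(estimate - target) / target
```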
49,219 | Unbiased estimator of binomial parameter
Let $n$ be the parameter of the binomial, $n=10$ in your case, and $m$ the sample size, $m=5$ in your case. I think there is an approximate answer to this that avoids long explicit summations in the case that $n$ is large and if additionally $np$ (or $n(1-p)$) is also large enough so that the normal approximation to th...
49,220 | Why Are Impulse Responses in VECM Permanent?
This is a great question, and I'm learning, so bear with me.
What would be a correct interpretation of an impulse response that does not go back to 0 in a VECM?
Riffing on the drunken walk theme, suppose a drunken man is randomly walking when a mean teenager pushes him. The push sends the man stumbling but he regains ...
49,221 | Leverages and effect of leverage points
We just need to calculate the hat matrix. Write the model for the one-way layout in the form $Y_{ij}= \alpha_j +\epsilon_{ij}$ with one parameter for each group (and no explicit intercept). That will make the calculations simpler (and the hat matrix will not depend on the parametrization chosen), $i=1,2,\dotsc,p, \quad...
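With the indicator parametrization described, $X^\top X=\mathrm{diag}(n_1,\dots,n_p)$, so the leverage of every observation in group $j$ works out to $1/n_j$. A quick numpy check with made-up group sizes:

```python
import numpy as np

# One-way layout with group sizes (2, 3, 5); one indicator column per group.
sizes = [2, 3, 5]
X = np.zeros((sum(sizes), len(sizes)))
row = 0
for j, nj in enumerate(sizes):
    X[row:row + nj, j] = 1.0
    row += nj

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix H = X (X'X)^{-1} X'
leverages = np.diag(H)                 # 1/n_j for each observation in group j
```

So an observation in a small group has high leverage, which is the point of the exercise.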
49,222 | What's aggregation bias, and how does it relate to the ecological fallacy?
From Clark and Avery (1976):
It has long been known that the use of aggregate data may yield correlation coefficients exhibiting considerable bias above their values at the individual level [10, 21]; and Blalock [2] has shown that the regression coefficients may be biased also. It is well established that it is incorr...
49,223 | Showing independence between two functions of a set of random variables
The MGF idea works well.
The scale of the exponential distribution doesn't matter, so we may take $\theta=1$ and set the exponential density to
$$f(x) = e^{-x} \mathcal{I}(x \gt 0).$$
Writing
$$Y = \sum_i a_i \log(X_i),\ Z = \sum_i X_i,$$
for $|s|\lt 1$ and $|t|\lt 1$ compute the joint MGF as
$$\eqalign{
\phi_{Y,Z}(s,...
49,224 | Non linear regression mixed model
Because you want to predict the optimal temperatures of each strain, treating the strains as fixed makes sense. However, an interaction between the three-level factor and the model for the optimum makes for a headache. I don't think there's enough data to fit it.
A random effects model might help. With your full data of...
49,225 | Non linear regression mixed model
Use intervals() to get confidence intervals of the model parameters. I also think that the mixed model is more appropriate. But a model with random effects in all parameters can be difficult to estimate. With the data you provided, the model is estimated but the correlation between random effects is close to one. I pre...
49,226 | How do I find the percentile p (or quantile q) from a weighted dataset? | To solve for quantile $q$ in a weighted set of ordered observations $x_1, x_2, \ldots$:
Let $W$ be the sum of the weights.
Let $w_1, w_2, \ldots$ equal the observation weights ordered by the ranks of the observations.
Find the largest $k$ such that $w_1+w_2+\ldots+w_k \leq Wq$.
Then $x_k$ is your estimate for the $q... | How do I find the percentile p (or quantile q) from a weighted dataset? | To solve for quantile $q$ in a weighted set of ordered observations $x_1, x_2, \ldots$:
Let $W$ be the sum of the weights.
Let $w_1, w_2, \ldots$ equal the observation weights ordered by the ranks o | How do I find the percentile p (or quantile q) from a weighted dataset?
To solve for quantile $q$ in a weighted set of ordered observations $x_1, x_2, \ldots$:
Let $W$ be the sum of the weights.
Let $w_1, w_2, \ldots$ equal the observation weights ordered by the ranks of the observations.
Find the largest $k$ such t... | How do I find the percentile p (or quantile q) from a weighted dataset?
To solve for quantile $q$ in a weighted set of ordered observations $x_1, x_2, \ldots$:
Let $W$ be the sum of the weights.
Let $w_1, w_2, \ldots$ equal the observation weights ordered by the ranks o |
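The weighted-quantile rule stated in this answer can be sketched in Python. This follows the truncated text literally (find the largest $k$ with $w_1+\ldots+w_k \leq Wq$ and return $x_k$); the helper name is mine, and the full answer may refine the rule beyond the truncation:

```python
def weighted_quantile(values, weights, q):
    """Estimate quantile q from weighted observations, following the rule
    above: find the largest k with w_1 + ... + w_k <= W*q and return x_k."""
    pairs = sorted(zip(values, weights))       # order observations by value
    total = sum(w for _, w in pairs)           # W, the sum of the weights
    cutoff = total * q
    cum = 0.0
    result = pairs[0][0]                       # fall back to the smallest x
    for x, w in pairs:
        cum += w
        if cum <= cutoff:
            result = x                         # still within W*q: keep x_k
        else:
            break
    return result

# Ten equally weighted observations: the 0.5 quantile lands on the 5th value.
print(weighted_quantile(list(range(1, 11)), [1] * 10, 0.5))  # prints 5
```

With equal weights this reduces to an ordinary order-statistic quantile, which is a quick sanity check on the rule.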
49,227 | How to compute estimate for the first time series value using ARIMA model? | There are different methods that are commonly used to calculate or set initial values for time series modeling algorithms.
The simplest are heuristics, like using the overall mean, or the first observation, or the mean of the first $n$ observations, or whatever. These are often used when there is no underlying statist... | How to compute estimate for the first time series value using ARIMA model? | There are different methods that are commonly used to calculate or set initial values for time series modeling algorithms.
The simplest are heuristics, like using the overall mean, or the first obser | How to compute estimate for the first time series value using ARIMA model?
There are different methods that are commonly used to calculate or set initial values for time series modeling algorithms.
The simplest are heuristics, like using the overall mean, or the first observation, or the mean of the first $n$ observat... | How to compute estimate for the first time series value using ARIMA model?
There are different methods that are commonly used to calculate or set initial values for time series modeling algorithms.
The simplest are heuristics, like using the overall mean, or the first obser |
49,228 | What statistical analysis to run for count data in R? | I'd recommend starting off using a Poisson regression model, which is well suited for count models. Since you seem to have multiple counts at different locations, you will need to use a method that takes into account the correlation of these observations within their clusters. I would suggest using a Generalized Esti... | What statistical analysis to run for count data in R? | I'd recommend starting off using a Poisson regression model, which is well suited for count models. Since you seem to have multiple counts at different locations, you will need to use a method that t | What statistical analysis to run for count data in R?
I'd recommend starting off using a Poisson regression model, which is well suited for count models. Since you seem to have multiple counts at different locations, you will need to use a method that takes into account the correlation of these observations within the... | What statistical analysis to run for count data in R?
I'd recommend starting off using a Poisson regression model, which is well suited for count models. Since you seem to have multiple counts at different locations, you will need to use a method that t |
49,229 | What statistical analysis to run for count data in R? | One simplified approach would entail pooling the counts for spring over the three-year interval, and the counts for fall over the same three years, separately. You can approach this as a goodness-of-fit (GoF) chi-squared test. The idea is that the number of counts would (under the null hypothesis of no difference betwe... | What statistical analysis to run for count data in R? | One simplified approach would entail pooling the counts for spring over the three-year interval, and the counts for fall over the same three years, separately. You can approach this as a goodness-of-f | What statistical analysis to run for count data in R?
One simplified approach would entail pooling the counts for spring over the three-year interval, and the counts for fall over the same three years, separately. You can approach this as a goodness-of-fit (GoF) chi-squared test. The idea is that the number of counts w... | What statistical analysis to run for count data in R?
One simplified approach would entail pooling the counts for spring over the three-year interval, and the counts for fall over the same three years, separately. You can approach this as a goodness-of-f |
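The pooled goodness-of-fit idea in this answer can be checked with a hand-computed chi-squared statistic. A Python sketch with made-up pooled counts; with two categories there is one degree of freedom, whose upper-tail probability can be written with `math.erfc`:

```python
import math

# Hypothetical pooled deer counts (made-up numbers for illustration).
spring_total, fall_total = 340, 290

# Under the null of no seasonal difference, each season expects half the total.
expected = (spring_total + fall_total) / 2
chi2 = sum((obs - expected) ** 2 / expected for obs in (spring_total, fall_total))

# With 2 categories there is 1 degree of freedom; for a chi-square with 1 df,
# the upper-tail probability is P(Z^2 > x) = erfc(sqrt(x / 2)).
p_value = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 3), round(p_value, 3))
```

With these invented counts the statistic just exceeds the 3.84 critical value, so the null of a 50/50 split would be rejected at the 5% level.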
49,230 | What statistical analysis to run for count data in R? | There is a debate in biology and ecology studies about transformation (e.g. log-transform) or model reformation (GLM) in dealing with count data. It really depends on the outcome you want to achieve and the cost (error) you can bear. If you want to visualize the data using the boxplot and avoid outliers, I would recomm... | What statistical analysis to run for count data in R? | There is a debate in biology and ecology studies about transformation (e.g. log-transform) or model reformation (GLM) in dealing with count data. It really depends on the outcome you want to achieve a | What statistical analysis to run for count data in R?
There is a debate in biology and ecology studies about transformation (e.g. log-transform) or model reformation (GLM) in dealing with count data. It really depends on the outcome you want to achieve and the cost (error) you can bear. If you want to visualize the dat... | What statistical analysis to run for count data in R?
There is a debate in biology and ecology studies about transformation (e.g. log-transform) or model reformation (GLM) in dealing with count data. It really depends on the outcome you want to achieve a |
49,231 | What statistical analysis to run for count data in R? | A basic statistical test you can do is the unpaired t-test. This will give you a t-value and a confidence interval (default 95%) to determine if there is a "true" difference between the number of deer in spring versus fall. R code below:
t.test(spring_deer_count, fall_deer_count)
If your t-value falls outside the confid... | What statistical analysis to run for count data in R? | A basic statistical test you can do is the unpaired t-test. This will give you t-value and a confidence interval (default 95%) to determine if there is a "true" difference between the number of deer i | What statistical analysis to run for count data in R?
A basic statistical test you can do is the unpaired t-test. This will give you a t-value and a confidence interval (default 95%) to determine if there is a "true" difference between the number of deer in spring versus fall. R code below:
t.test(spring_deer_count, fall... | What statistical analysis to run for count data in R?
A basic statistical test you can do is the unpaired t-test. This will give you a t-value and a confidence interval (default 95%) to determine if there is a "true" difference between the number of deer i
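For completeness, the Welch (unpaired, unequal-variance) t statistic that R's `t.test()` reports by default can be computed by hand; the per-survey counts below are made up for illustration:

```python
import math

# Hypothetical per-survey deer counts for each season (invented data).
spring = [12, 15, 11, 14, 16, 13]
fall = [9, 8, 11, 7, 10, 9]

def welch_t(a, b):
    """Welch's unpaired t statistic and degrees of freedom,
    mirroring what R's t.test() computes by default."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2a, se2b = va / len(a), vb / len(b)
    t = (ma - mb) / math.sqrt(se2a + se2b)
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (len(a) - 1) + se2b ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(spring, fall)
print(round(t, 2), round(df, 1))
```

The p-value then comes from a t distribution with the (fractional) Welch degrees of freedom, which is where a statistics library is handy.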
49,232 | Should I have "Confidence" in Credibility Intervals? | If you have accurately characterized your beliefs about a particular quantity in the prior distribution, then YES, you should have "confidence" in your updated beliefs, represented by the posterior distribution (and therefore credible intervals constructed from it), because Bayes' rule provides the appropriate way to u... | Should I have "Confidence" in Credibility Intervals? | If you have accurately characterized your beliefs about a particular quantity in the prior distribution, then YES, you should have "confidence" in your updated beliefs, represented by the posterior di | Should I have "Confidence" in Credibility Intervals?
If you have accurately characterized your beliefs about a particular quantity in the prior distribution, then YES, you should have "confidence" in your updated beliefs, represented by the posterior distribution (and therefore credible intervals constructed from it), ... | Should I have "Confidence" in Credibility Intervals?
If you have accurately characterized your beliefs about a particular quantity in the prior distribution, then YES, you should have "confidence" in your updated beliefs, represented by the posterior di |
49,233 | Should I have "Confidence" in Credibility Intervals? | It sounds from your statement, "how can I have 95% belief in anything (you either believe something is true or you don't)," that you have a great deal of confidence in what statistics can tell us. Inherently, I (a proponent of Bayesian methods) ask how much to believe in something, and with new information, I am able to... | Should I have "Confidence" in Credibility Intervals? | It sounds from your statement, "how can I have 95% belief in anything (you either believe something is true or you don't)," that you have a great deal of confidence in what statistics can tell us. Inhe | Should I have "Confidence" in Credibility Intervals?
It sounds from your statement,"how can I have 95% belief in anything (you either believe something is true or you don't)," that you have a great deal of confidence in what statistics can tell us. Inherently, I (a proponent of Bayesian methods) ask how much to believe... | Should I have "Confidence" in Credibility Intervals?
It sounds from your statement,"how can I have 95% belief in anything (you either believe something is true or you don't)," that you have a great deal of confidence in what statistics can tell us. Inhe |
49,234 | Formal test for exogeneity of instruments | If you have exactly as many instrumental variables as endogenous regressors, then there is no way to test for IV validity in a homogeneous effects model.
Consider, for example the following model:
$$
Y = \alpha + \beta D + U
$$
This is a homogeneous effects model: the treatment effect is a constant $\beta$ that is the... | Formal test for exogeneity of instruments | If you have exactly as many instrumental variables as endogenous regressors, then there is no way to test for IV validity in a homogeneous effects model.
Consider, for example the following model:
$$
| Formal test for exogeneity of instruments
If you have exactly as many instrumental variables as endogenous regressors, then there is no way to test for IV validity in a homogeneous effects model.
Consider, for example the following model:
$$
Y = \alpha + \beta D + U
$$
This is a homogeneous effects model: the treatmen... | Formal test for exogeneity of instruments
If you have exactly as many instrumental variables as endogenous regressors, then there is no way to test for IV validity in a homogeneous effects model.
Consider, for example the following model:
$$
|
49,235 | Formal test for exogeneity of instruments | Hausman and Wu specifications and the test for overidentification will do. | Formal test for exogeneity of instruments | Hausman and Wu specifications and the test for overidentification will do. | Formal test for exogeneity of instruments
Hausman and Wu specifications and the test for overidentification will do. | Formal test for exogeneity of instruments
Hausman and Wu specifications and the test for overidentification will do.
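When the model is over-identified (more instruments than endogenous regressors), the over-identification test this answer alludes to can be sketched as a Sargan J statistic. A hedged numpy illustration on a simulated model; the data-generating process and all numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated data: two valid instruments z1, z2 for one endogenous
# regressor x, so the model is over-identified by one restriction.
z1, z2 = rng.normal(size=n), rng.normal(size=n)
v = rng.normal(size=n)                      # shared shock creating endogeneity
x = z1 + 0.8 * z2 + v + rng.normal(size=n)
y = 1.0 + 2.0 * x + v + rng.normal(size=n)

Z = np.column_stack([np.ones(n), z1, z2])   # instruments (incl. constant)
X = np.column_stack([np.ones(n), x])        # regressors

# 2SLS: project X on Z, then regress y on the fitted values.
Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(Xhat, y, rcond=None)[0]

# Sargan J statistic: n * R^2 from regressing the 2SLS residuals on Z;
# with valid instruments it is approximately chi-square with
# (#instruments - #endogenous regressors) = 1 degree of freedom.
u = y - X @ beta
uhat = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
r2 = 1 - np.sum((u - uhat) ** 2) / np.sum((u - u.mean()) ** 2)
J = n * r2
print(round(beta[1], 2), round(J, 2))
```

Here both instruments are valid by construction, so the slope estimate lands near 2 and J stays in the bulk of a chi-square(1) distribution; an invalid instrument would inflate J.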
49,236 | Formal test for exogeneity of instruments | You're confusing the concept of endogeneity of instrument with its independence from your outcome.
Given the equation:
$y=\beta_0+\beta_1x+\beta_2z+u$
where $y$ is the outcome, $x$ is the endogenous variable, $z$ is an instrument, and $u$ are unobservables. Endogeneity is what happens when one or more of your right-h... | Formal test for exogeneity of instruments | You're confusing the concept of endogeneity of instrument with its independence from your outcome.
Given the equation:
$y=\beta_0+\beta_1x+\beta_2z+u$
where $y$ is the outcome, $x$ is the endogenous | Formal test for exogeneity of instruments
You're confusing the concept of endogeneity of instrument with its independence from your outcome.
Given the equation:
$y=\beta_0+\beta_1x+\beta_2z+u$
where $y$ is the outcome, $x$ is the endogenous variable, $z$ is an instrument, and $u$ are unobservables. Endogeneity is wha... | Formal test for exogeneity of instruments
You're confusing the concept of endogeneity of instrument with its independence from your outcome.
Given the equation:
$y=\beta_0+\beta_1x+\beta_2z+u$
where $y$ is the outcome, $x$ is the endogenous |
49,237 | Combining transition operators in MCMC | A combination of MCMC proposals can only improve upon each of the Markov operators used, if you do not take computing time into account. There is for instance an early result by Tierney (1994) about the benefits of mixing two MCMC kernels. (One can also argue that Gibbs sampling is nothing but a combination of kernels.... | Combining transition operators in MCMC | A combination of MCMC proposals can only improve upon each of the Markov operators used, if you do not take computing time into account. There is for instance an early result by Tierney (1994) about t | Combining transition operators in MCMC
A combination of MCMC proposals can only improve upon each of the Markov operators used, if you do not take computing time into account. There is for instance an early result by Tierney (1994) about the benefits of mixing two MCMC kernels. (One can also argue that Gibbs sampling i... | Combining transition operators in MCMC
A combination of MCMC proposals can only improve upon each of the Markov operators used, if you do not take computing time into account. There is for instance an early result by Tierney (1994) about t |
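As a purely illustrative sketch (not taken from Tierney 1994), here is a random-walk Metropolis sampler that mixes two proposal kernels; each kernel leaves the target invariant, so the randomly chosen mixture does as well:

```python
import math
import random

random.seed(1)

def log_target(x):
    # Standard normal log-density (up to an additive constant).
    return -0.5 * x * x

def metropolis_mixture(n_iter, scales=(0.2, 5.0)):
    """Random-walk Metropolis mixing two proposal kernels: at each step
    a small-step or large-step Gaussian proposal is chosen at random."""
    x, samples = 0.0, []
    for _ in range(n_iter):
        scale = random.choice(scales)
        prop = x + random.gauss(0.0, scale)
        # Symmetric proposal, so the acceptance ratio is just the density ratio.
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

s = metropolis_mixture(50_000)
mean = sum(s) / len(s)
var = sum((v - mean) ** 2 for v in s) / len(s)
print(round(mean, 2), round(var, 2))
```

The small-scale kernel explores locally while the large-scale kernel makes occasional long jumps; the sample mean and variance should approximate the N(0, 1) target's 0 and 1.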
49,238 | Markov Random Field Non-Positive Distribution | The forward direction is actually quite a deep result, known as the Hammersley-Clifford Theorem. The counterexample for non-positive distributions was found by Moussouris, and you can see it on page 12 here, and the explanation that follows on page 13. The given network is not factorable.
In case the link goes stale, ... | Markov Random Field Non-Positive Distribution | The forward direction is actually quite a deep result, known as the Hammersley-Clifford Theorem. The counterexample for non-positive distributions was found by Moussouris, and you can see it on page 1 | Markov Random Field Non-Positive Distribution
The forward direction is actually quite a deep result, known as the Hammersley-Clifford Theorem. The counterexample for non-positive distributions was found by Moussouris, and you can see it on page 12 here, and the explanation that follows on page 13. The given network is ... | Markov Random Field Non-Positive Distribution
The forward direction is actually quite a deep result, known as the Hammersley-Clifford Theorem. The counterexample for non-positive distributions was found by Moussouris, and you can see it on page 1 |
49,239 | Is Quasi-Poisson the same thing as fitting a Poisson GEE model? | Quasi-poisson GLM is a special case of Poisson GEE.
The specification of GEE (copied from wikipedia) is that
$$U(\beta)=\sum_i \frac{\partial\mu_{ij}}{\partial \beta_k} V_i^{-1}(Y_i-\mu_i(\beta))$$
where $V_i$ is the variance of observation $i$. In the case that these are a constant multiple of the (standard dispersion) poi... | Is Quasi-Poisson the same thing as fitting a Poisson GEE model? | Quasi-poisson GLM is a special case of Poisson GEE.
The specification of GEE (copied from wikipedia) is that
$$U(\beta)=\sum_i \frac{\partial\mu_{ij}}{\partial \beta_k} V_i^{-1}(Y_i-\mu_i(\beta))$$
where $ | Is Quasi-Poisson the same thing as fitting a Poisson GEE model?
Quasi-poisson GLM is a special case of Poisson GEE.
The specification of GEE (copied from wikipedia) is that
$$U(\beta)=\sum_i \frac{\partial\mu_{ij}}{\partial \beta_k} V_i^{-1}(Y_i-\mu_i(\beta))$$
where $V_i$ is the variance of observation $i$. In the case tha... | Is Quasi-Poisson the same thing as fitting a Poisson GEE model?
Quasi-poisson GLM is a special case of Poisson GEE.
The specification of GEE (copied from wikipedia) is that
$$U(\beta)=\sum_i \frac{\partial\mu_{ij}}{\partial \beta_k} V_i^{-1}(Y_i-\mu_i(\beta))$$
where $ |
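The reduction can be seen numerically: quasi-Poisson solves the same estimating equations as the Poisson GLM, so the coefficients coincide and only the covariance is rescaled by the Pearson dispersion. A hedged numpy sketch on simulated data (a hand-rolled IRLS fit, not any particular package's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
mu_true = np.exp(0.5 + 0.3 * x)   # true coefficients (0.5, 0.3), made up
y = rng.poisson(mu_true)

X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):                       # IRLS for a log-link Poisson GLM
    eta = X @ beta
    mu = np.exp(eta)
    W = mu                                # working weights (= variance here)
    z = eta + (y - mu) / mu               # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

# Quasi-Poisson keeps the same estimating equations (hence the same beta)
# and only rescales the covariance by the dispersion estimate phi.
mu = np.exp(X @ beta)
phi = np.sum((y - mu) ** 2 / mu) / (n - 2)    # Pearson dispersion
cov = phi * np.linalg.inv(X.T @ (mu[:, None] * X))
print(np.round(beta, 2), round(phi, 2))
```

Because the simulated counts really are Poisson, the dispersion estimate lands near 1; overdispersed data would push it above 1 and widen the quasi-Poisson standard errors.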
49,240 | Transpose or lack of transpose in the $\hat y=X\hat \beta$ regression equation | In matrix notation
$$\hat{Y} = X \hat{\beta} $$
where $\hat{Y}$ is the fitted $m \times 1$ response vector, $X$ is an $m \times n$ model matrix and $\hat{\beta}$ is the estimated $n \times 1$ coefficient. Each column of $X$ is a predictor and each row is observed predictor values for each observation.
If we were to write $... | Transpose or lack of transpose in the $\hat y=X\hat \beta$ regression equation | In matrix notation
$$\hat{Y} = X \hat{\beta} $$
where $\hat{Y}$ is the fitted $m \times 1$ response vector, $X$ is an $m \times n$ model matrix and $\hat{\beta}$ is the estimated $n \times 1$ coeffic | Transpose or lack of transpose in the $\hat y=X\hat \beta$ regression equation
In matrix notation
$$\hat{Y} = X \hat{\beta} $$
where $\hat{Y}$ is the fitted $m \times 1$ response vector, $X$ is an $m \times n$ model matrix and $\hat{\beta}$ is the estimated $n \times 1$ coefficient. Each column of $X$ is a predictor a... | Transpose or lack of transpose in the $\hat y=X\hat \beta$ regression equation
In matrix notation
$$\hat{Y} = X \hat{\beta} $$
where $\hat{Y}$ is the fitted $m \times 1$ response vector, $X$ is an $m \times n$ model matrix and $\hat{\beta}$ is the estimated $n \times 1$ coeffic |
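A tiny numeric illustration of $\hat{Y} = X\hat{\beta}$ with $m = 5$ observations and $n = 2$ columns (an intercept and one predictor); the numbers are made up:

```python
import numpy as np

# Model matrix X: each row is one observation, each column one predictor.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Least-squares estimate: betahat = (X'X)^{-1} X'y.
betahat = np.linalg.solve(X.T @ X, X.T @ y)
yhat = X @ betahat                       # fitted m x 1 response vector
print(np.round(betahat, 2), yhat.shape)
```

Note the fitted values come from `X @ betahat` with no transpose, exactly as in the displayed equation; a transpose only appears inside the normal equations.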
49,241 | Endogeneity in forecasting | It is certainly true that endogeneity is not acceptable if our goal is to find a structural/causal effect. If you are focused on forecasting, however, endogeneity as produced by omitted variables is actually not a major problem. Endogeneity produces, first of all, biased parameter estimates. Other sources of endogeneity, such as measur... | Endogeneity in forecasting | It is certainly true that endogeneity is not acceptable if our goal is to find a structural/causal effect. If you are focused on forecasting, however, endogeneity as produced by omitted variables is actually not | Endogeneity in forecasting
It is certainly true that endogeneity is not acceptable if our goal is to find a structural/causal effect. If you are focused on forecasting, however, endogeneity as produced by omitted variables is actually not a major problem. Endogeneity produces, first of all, biased parameter estimates. Other source... | Endogeneity in forecasting
It is certainly true that endogeneity is not acceptable if our goal is to find a structural/causal effect. If you are focused on forecasting, however, endogeneity as produced by omitted variables is actually not
49,242 | How to tune the weak learner in boosted algorithms | Yes, weak learners are absolutely required for boosting to be really successful. That is because each boosting round for trees actually results in more splits and a more complicated model. This will overfit quite quick if we let it. On the other hand, if we pass a more complicated model such as a large polynomial lin... | How to tune the weak learner in boosted algorithms | Yes, weak learners are absolutely required for boosting to be really successful. That is because each boosting round for trees actually results in more splits and a more complicated model. This will | How to tune the weak learner in boosted algorithms
Yes, weak learners are absolutely required for boosting to be really successful. That is because each boosting round for trees actually results in more splits and a more complicated model. This will overfit quite quick if we let it. On the other hand, if we pass a mo... | How to tune the weak learner in boosted algorithms
Yes, weak learners are absolutely required for boosting to be really successful. That is because each boosting round for trees actually results in more splits and a more complicated model. This will |
49,243 | How to tune the weak learner in boosted algorithms | The definition of weak learner is:
Something better than a random guess (1), (2), (3)
There is nothing in that about overfitting, though weakness means overfitting is highly likely. Being a very bad estimator and being a robust estimator tend to not coincide.
Whether or not your learners are weak, you can do a primi... | How to tune the weak learner in boosted algorithms | The definition of weak learner is:
Something better than a random guess (1), (2), (3)
There is nothing in that about overfitting, though weakness means overfitting is highly likely. Being a very ba | How to tune the weak learner in boosted algorithms
The definition of weak learner is:
Something better than a random guess (1), (2), (3)
There is nothing in that about overfitting, though weakness means overfitting is highly likely. Being a very bad estimator and being a robust estimator tend to not coincide.
Whethe... | How to tune the weak learner in boosted algorithms
The definition of weak learner is:
Something better than a random guess (1), (2), (3)
There is nothing in that about overfitting, though weakness means overfitting is highly likely. Being a very ba |
49,244 | How to tune the weak learner in boosted algorithms | One of the practical reasons why we use weak learners is that we don't need to care about issues like this. In many cases ensembling many weak learners is just enough to achieve good performance. Weak learners are simple by design, we don't usually tune them. You are correct, if you wanted to tune them, this becomes a ... | How to tune the weak learner in boosted algorithms | One of the practical reasons why we use weak learners is that we don't need to care about issues like this. In many cases ensembling many weak learners is just enough to achieve good performance. Weak | How to tune the weak learner in boosted algorithms
One of the practical reasons why we use weak learners is that we don't need to care about issues like this. In many cases ensembling many weak learners is just enough to achieve good performance. Weak learners are simple by design, we don't usually tune them. You are c... | How to tune the weak learner in boosted algorithms
One of the practical reasons why we use weak learners is that we don't need to care about issues like this. In many cases ensembling many weak learners is just enough to achieve good performance. Weak |
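The role of the weak learner can be made concrete with a bare-bones L2 boosting loop over depth-1 stumps (a generic sketch on simulated data, not any library's implementation): each round fits a stump to the current residuals, and shrinkage keeps each learner's contribution small:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.2, size=200)

def fit_stump(x, y):
    """Depth-1 regression tree: best single split minimizing squared error."""
    best = None
    for t in np.unique(x)[1:]:
        left, right = y[x < t], y[x >= t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda q: np.where(q < t, lo, hi)

# L2 gradient boosting with weak learners (stumps) and shrinkage lr:
# each round fits a stump to the current residuals.
lr, pred = 0.1, np.zeros_like(y)
errors = []
for _ in range(100):
    stump = fit_stump(x, y - pred)
    pred = pred + lr * stump(x)
    errors.append(((y - pred) ** 2).mean())
print(round(errors[0], 3), round(errors[-1], 3))
```

Training error is non-increasing here because each stump is a least-squares projection of the current residuals; generalization, of course, still has to be checked on held-out data.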
49,245 | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point? | Before you go any further, ask yourself why you want a cutoff at all and, if so, what you mean by "optimal."
Survival plots may require some choice of a cutoff for display, but if you have a continuous predictor your most important job is to figure out its actual relation to outcome. That might end up being a cutoff in... | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p | Before you go any further, ask yourself why you want a cutoff at all and, if so, what you mean by "optimal."
Survival plots may require some choice of a cutoff for display, but if you have a continuou | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point?
Before you go any further, ask yourself why you want a cutoff at all and, if so, what you mean by "optimal."
Survival plots may require some choice of a cutoff for display, but if you have a continuous predictor yo... | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p
Before you go any further, ask yourself why you want a cutoff at all and, if so, what you mean by "optimal."
Survival plots may require some choice of a cutoff for display, but if you have a continuou |
49,246 | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point? | Obviously the median is the more robust choice in this case, unless there is some clinical consideration to do otherwise. The only reasonable use for finding an "optimal cut-off" would be in order to determine some cut-off for some future analysis. In general, if you want to make a continuous covariate discrete, the de... | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p | Obviously the median is the more robust choice in this case, unless there is some clinical consideration to do otherwise. The only reasonable use for finding an "optimal cut-off" would be in order to | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point?
Obviously the median is the more robust choice in this case, unless there is some clinical consideration to do otherwise. The only reasonable use for finding an "optimal cut-off" would be in order to determine some... | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p
Obviously the median is the more robust choice in this case, unless there is some clinical consideration to do otherwise. The only reasonable use for finding an "optimal cut-off" would be in order to |
49,247 | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point? |
You can try the 'maxstat' tool (an R package) and this... | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p |
| How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-point?
... | How to determine the cut-point of continuous predictor in survival analysis, optimal or median cut-p
|
49,248 | Incorporating auto-correlation structure into a negative binomial generalized additive mixed model using mgcv in R | This is reasonable, in the same sense that working correlation matrices are included in GLMs to give the Generalized Estimating Equations (GEE) model.
With gamm(), what you are getting is gamm() -> glmmPQL() -> lme() so you are really fitting a specially weighted linear mixed model. I won't ask how you found that $\theta =... | Incorporating auto-correlation structure into a negative binomial generalized additive mixed model u | This is reasonable, in the same sense that working correlation matrices are included in GLMs to give the Generalized Estimating Equations (GEE) model.
With gamm(), what you are getting is gamm() -> glmmPQ | Incorporating auto-correlation structure into a negative binomial generalized additive mixed model using mgcv in R
This is reasonable, in the same sense that working correlation matrices are included in GLMs to give the Generalized Estimating Equations (GEE) model.
With gamm(), what you are getting is gamm() -> glmmPQL() -... | Incorporating auto-correlation structure into a negative binomial generalized additive mixed model u
This is reasonable, in the same sense that working correlation matrices are included in GLMs to give the Generalized Estimating Equations (GEE) model.
With gamm(), what you are getting is gamm() -> glmmPQ |
49,249 | Is a Markov chain with a limiting distribution a stationary process? | Here's a simple example illustrating why the answer is no.
Let $$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}$$ be the transition matrix for a first-order Markov process $X_t$ with state space $\left\{0, 1\right\}$. The limiting distribution is $\pi = \left(0.5, 0.5\right)$. However, suppose you start the p... | Is a Markov chain with a limiting distribution a stationary process? | Here's a simple example illustrating why the answer is no.
Let $$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}$$ be the transition matrix for a first-order Markov process $X_t$ with state s | Is a Markov chain with a limiting distribution a stationary process?
Here's a simple example illustrating why the answer is no.
Let $$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}$$ be the transition matrix for a first-order Markov process $X_t$ with state space $\left\{0, 1\right\}$. The limiting distributi... | Is a Markov chain with a limiting distribution a stationary process?
Here's a simple example illustrating why the answer is no.
Let $$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix}$$ be the transition matrix for a first-order Markov process $X_t$ with state s |
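The same counterexample can be checked numerically:

```python
import numpy as np

# Transition matrix from the example above.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Start the chain deterministically in state 0: marginal (1, 0) at t = 0.
dist = np.array([1.0, 0.0])
dist1 = dist @ P       # marginal at t = 1
print(dist, dist1)
```

The limiting distribution (0.5, 0.5) is reached after a single step, yet the process started this way is not stationary: the t = 0 marginal differs from every later marginal.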
49,250 | Deterministic sampling from discrete distribution | One general way to generate similar sequences for distributions with similar probabilities is the following. Suppose $A$ is an ordered finite alphabet $(a,b,c,\ldots)$ with a probability distribution $p_A$. To draw a value at random from $A$, generate a vector of independent uniform variates $\mathbf{U}=(U_a, U_b, U_... | Deterministic sampling from discrete distribution | One general way to generate similar sequences for distributions with similar probabilities is the following. Suppose $A$ is an ordered finite alphabet $(a,b,c,\ldots)$ with a probability distribution | Deterministic sampling from discrete distribution
One general way to generate similar sequences for distributions with similar probabilities is the following. Suppose $A$ is an ordered finite alphabet $(a,b,c,\ldots)$ with a probability distribution $p_A$. To draw a value at random from $A$, generate a vector of inde... | Deterministic sampling from discrete distribution
One general way to generate similar sequences for distributions with similar probabilities is the following. Suppose $A$ is an ordered finite alphabet $(a,b,c,\ldots)$ with a probability distribution |
49,251 | stacking and blending of regression models | The simplest way to stack your predictions is to take the average. Linear regression is certainly an alternative. Here is a link to a video in which Phil Brierley describes using regularized regression instead of linear regression to combine model predictions. You could also look at accounts by other Kaggle winners. Fo... | stacking and blending of regression models | The simplest way to stack your predictions is to take the average. Linear regression is certainly an alternative. Here is a link to a video in which Phil Brierley describes using regularized regressio | stacking and blending of regression models
The simplest way to stack your predictions is to take the average. Linear regression is certainly an alternative. Here is a link to a video in which Phil Brierley describes using regularized regression instead of linear regression to combine model predictions. You could also l... | stacking and blending of regression models
The simplest way to stack your predictions is to take the average. Linear regression is certainly an alternative. Here is a link to a video in which Phil Brierley describes using regularized regressio |
49,252 | Calculating p-values for two tail test for population variance | What you are dealing with in this question is a two-sided variance test, which is a specific case of a two-sided test with an asymmetric null distribution. The p-value is the total area under the null density for all values in the lower and upper tails of that density that are at least as "extreme" (i.e., at least as ...
49,253 | Calculating p-values for two tail test for population variance | The formula you are using for p-value calculation is incorrect.
The right way to do it would be:
p-value = probability ($\chi^2$ <= value) with (n-2) degrees of freedom (chi-square distribution)
Here the value is 15.35667 and (n-2) = 16. Now identify the corresponding p-value using any tool that is available online or a t...
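A stdlib-only sketch of the two-sided chi-square p-value discussed in the two answers above, using the statistic 15.35667 and 16 degrees of freedom quoted there. For even degrees of freedom the chi-square survival function has a closed form via the Poisson identity, so no scipy is needed.

```python
import math

def chi2_sf_even_df(x, df):
    # Survival function of a chi-square with EVEN df, via the identity
    # P(X > x) = P(Poisson(x/2) <= df/2 - 1)
    k = df // 2
    lam = x / 2.0
    return math.exp(-lam) * sum(lam ** i / math.factorial(i) for i in range(k))

stat, df = 15.35667, 16          # numbers quoted in the answer above
sf = chi2_sf_even_df(stat, df)   # upper-tail area
cdf = 1.0 - sf                   # lower-tail area
p_two_sided = 2 * min(cdf, sf)   # double the smaller tail (asymmetric null)
```

Doubling the smaller tail is one common convention for two-sided tests with asymmetric null distributions, matching the "total area in both tails" idea described above.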
49,254 | Cross-validation techniques for time series data | Sliding window is perhaps the most straightforward solution for time series; see e.g. Hyndman & Athanasopoulos, "Forecasting: Principles and Practice", Chapter 2.5 (bottom of the page) and Rob J. Hyndman's blog post "Time series cross-validation: an R example".
However, Bergmeir et al., "A note on the validity of cross-val...
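The sliding-window scheme mentioned above can be sketched as a split generator: train on a fixed-width window and test only on observations strictly after it, so the temporal order is never violated. The function name and parameters are mine, not from the answer.

```python
def sliding_window_splits(n, train_size, horizon=1):
    """Rolling-window splits for a series of length n: each split trains on a
    fixed-width window and tests on the observation(s) immediately after it."""
    splits = []
    start = 0
    while start + train_size + horizon <= n:
        train_idx = list(range(start, start + train_size))
        test_idx = list(range(start + train_size, start + train_size + horizon))
        splits.append((train_idx, test_idx))
        start += 1  # slide the window forward one step
    return splits

splits = sliding_window_splits(n=10, train_size=6, horizon=1)
```

Each test index is always later than every training index, which is the key difference from ordinary shuffled cross-validation.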
49,255 | What are the prerequisites to start learning Bayesian analysis? | Maturity is not really what is needed. Clarity of purpose is useful, though, and that clarity is often absent in statistics books and courses.
According to Richard Royall, there are three main types of question that are typically answered with the help of statistics, and I think that those questions are the prerequisit...
49,256 | Metrics for one-class classification | One-class classification is used when we have only "positive" labels (although some argue for using it when the quality of the data about the labels is poor) for outlier, or anomaly, detection.
With such data you cannot assess accuracy of the predictions. Technically you can check if it properly labeled all your...
49,257 | Metrics for one-class classification | Though it's a late reply, I'd like to point out implicit assumptions made by previous answers that likely don't hold.
For one-class classification, we don't know the real ratio of positive and negative data, so we cannot assume any development set has a distribution similar to the real data.
A standard setting for one-class classifi...
49,258 | Metrics for one-class classification | @user3791422 has the right answer. In addition, I would like to point out:
If you have the notion of True Positive and False Negative, it means you have a notion of the ground truth and you have predicted responses. Therefore, by definition, False Positive and True Negative should exist.
Logically, what the OP didn't ...
49,259 | Metrics for one-class classification | When you do one-class classification, besides TP and FN, you should also have FP (false positive) and TN (true negative). FP are the instances that you have classified as positive when they were actually negative, and TN are the instances that you have correctly classified as negative. Then you can calculate precision...
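The metrics built from the four counts discussed in the answers above can be computed directly; this is a generic sketch with made-up counts, not code from any of the answers.

```python
def precision_recall_f1(tp, fp, fn):
    # Precision: fraction of predicted positives that are truly positive
    precision = tp / (tp + fp)
    # Recall (sensitivity): fraction of true positives that were found
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
```

Note that, as the answers point out, these particular metrics need FP as well as TP and FN, so they presuppose that negatives exist in the evaluation data.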
49,260 | What is the connection between many highly correlated parameters in weight matrix with gradient descent converges slowly? | General problem with correlated inputs: In general, it is possible to have correlated input variables, which leads to correlated weights. Let's take an extreme example and let's assume you have a duplicate feature, $x_1 = x_2$ (perfect correlation), and you want a linear function that maps $X$ to $Y$, $Y = f(X)$, where ...
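The duplicate-feature example above can be made concrete: with $x_1 = x_2$, any weights satisfying $w_1 + w_2 = 3$ fit perfectly, so the individual weights are not identified. A numpy sketch (illustrative data, mine not the answer's):

```python
import numpy as np

x = np.arange(1.0, 6.0)          # a single underlying feature
X = np.column_stack([x, x])      # duplicated feature: x1 = x2, perfect correlation
y = 3.0 * x                      # target generated as y = 3 * x

# The system is rank-deficient, so lstsq returns the minimum-norm solution,
# which splits the total weight of 3 equally between the two copies.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The flat valley of equally good solutions along $w_1 + w_2 = 3$ is exactly the kind of ill-conditioned loss surface on which gradient descent converges slowly.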
49,261 | Cross-validation vs random sampling for classification test | If you use some kind of validation (doesn't matter which) to optimize your model (e.g. by driving the feature reduction), and particularly if you compare many models and/or optimize iteratively, you absolutely need to do a validation of the resulting final model. Whether you do that by a separate validation study, nest...
49,262 | Spearman's Rank-Order Correlation for higher dimensions | Yes, you could in principle extend the idea of a rank correlation to higher dimensions as long as you have a way of ordering the points. For instance, consider two vectors $x_i = (x_{i1}, x_{i2}, \ldots , x_{ip})$ and $x_j = (x_{j1}, x_{j2}, \ldots, x_{jp})$. We could start by comparing the first two coordinates and ...
49,263 | Spearman's Rank-Order Correlation for higher dimensions | If you want to extend the idea of Spearman rank correlation to higher dimensions and check for a comonotonic dependence between your $3$ variables, you can do the following:
Transform your data $X$, $Y$, and $Z$ with rank statistics into $ranks(X),ranks(Y),ranks(Z)$
If dependence is (perfectly) comonotonic, then the sc...
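The rank-transform step described above can be sketched in plain Python: under perfect comonotonicity, all three variables produce identical rank vectors. The data below are made up; ties are assumed away for simplicity.

```python
def ranks(values):
    # Rank statistics (1 = smallest); assumes no ties for simplicity
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

# Hypothetical data: y and z are strictly increasing transforms of x,
# so the three variables are perfectly comonotonic
x = [0.2, 1.5, 0.9, 2.4]
y = [v ** 3 for v in x]
z = [10 * v + 1 for v in x]

same_ranks = ranks(x) == ranks(y) == ranks(z)
```

Identical rank vectors across all variables is exactly the signature of (perfect) comonotonic dependence that the answer checks for.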
49,264 | Spearman's Rank-Order Correlation for higher dimensions | You might want to have a look at the following article: Taskinen, S., Randles, R., & Oja, H. (2005). Multivariate nonparametric tests of independence. Journal of the American Statistical Association, 100 (471), 916-925. It gives some (rather technical) generalisations of Spearman's rank correlation coefficient to highe...
49,265 | Why can correlograms indicate non-stationarity? | The quote in your comment claims too much but does relate to something real, and that something can be useful in figuring out suitable models for data.
If you have an $I(1)$ series (a very specific kind of nonstationarity), you should see an ACF that doesn't exhibit the kind of geometric "decay" in the characteristic m...
49,266 | Why standard normal samples multiplied by sd are samples from a normal dist with that sd | Assume that $X$ has a normal distribution with mean $\mu=0$ and variance $\sigma^2$. Then the probability density function (pdf) of the random variable $X$ is given by:
\begin{eqnarray*}
f_X(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^{2}}{2\sigma^{2}}}
\end{eqnarray*}
for $-\infty<x<\infty$ and $\sigma>0$.
Now, when $Z...
49,267 | Why standard normal samples multiplied by sd are samples from a normal dist with that sd | The normal CDF can be written as $$p=\frac{1}{2}\left[1+\text{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]$$
where $\text{erf}$ is the error function. For a standard normal, $\mu=0$ and $\sigma=1$. If you were to multiply your random variate $x$ by constant $a$, the only way in which you could keep the cumulat...
49,268 | Why standard normal samples multiplied by sd are samples from a normal dist with that sd | The standard normal distribution has zero mean and one sd. So, if we multiply the distribution by a factor of, let's say, 2, the sd will now be 2. The reason is that if the distribution is multiplied by 2, the values are doubled, so the distance from the mean is doubled too, and hence the sd is doubled as well.
The ...
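The scaling claim made in the answers above can be checked by simulation: multiplying standard normal draws by $\sigma$ yields draws whose sample standard deviation is close to $\sigma$. A seeded sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
z = rng.standard_normal(100_000)  # standard normal draws: mean 0, sd 1

sigma = 2.0
x = sigma * z                     # should behave like N(0, sigma^2) draws

mean_est = x.mean()
sd_est = x.std()
```

With 100,000 draws the sample sd lands within a few hundredths of the target $\sigma = 2$, illustrating the result numerically rather than proving it.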
49,269 | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the condition is possible? | I almost surely do not know what is meant by a.s. in the equation tagged with a $*$ in your question, but the proof of the independence stuff is straightforward.
Given any event $B$, not necessarily of positive probability, we can express it as the disjoint union of the events $A\cap B$ and $A^c\cap B$, that is, $B = ...
49,270 | Does an unconditional probability of 1 or 0 imply a conditional probability of 1 or 0 if the condition is possible? | Prove $P(A|B) = 1$ if $P(A) = 1, P(B) > 0$:
$$P(A) = 1$$
$$\to 1_A = 1_\Omega \ \text{a.s.}$$
$$\to 1_A 1_B = 1_\Omega 1_B \ \text{a.s.}$$
$$\to 1_{A \cap B} = 1_B \ \text{a.s.}$$
$$\to P(A \cap B) = P(B)$$
$$\to P(A|B)P(B) = P(B)$$
$$\to P(A|B) = 1 \ QED$$
The last line assumes $P(B) > 0$.
Prove $P(A|B) = 0$ if $P(A) =...
49,271 | Can I ignore under-dispersion in my count data? | If the data are overdispersed, you can estimate the same relative rates and calculate prediction intervals as in a Poisson model using a quasipoisson model. Even if the data are not overdispersed, a quasipoisson model is valid and fairly efficient. A quasipoisson model just extends the Poisson by estimating a dispersio...
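The dispersion being discussed can be summarized by the dispersion index, variance over mean: roughly 1 for Poisson-like counts, below 1 for under-dispersed ones. A stdlib sketch with made-up, deliberately regular counts:

```python
# Dispersion index: variance / mean. Poisson data has index near 1;
# a quasi-Poisson model estimates this factor instead of fixing it at 1.
counts = [2, 3, 2, 3, 2, 3, 2, 3]  # made-up, very regular (under-dispersed) counts

n = len(counts)
mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
dispersion = var / mean
```

Here the counts alternate tightly around their mean, so the index comes out well below 1, the under-dispersed case the question asks about.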
49,272 | Can I ignore under-dispersion in my count data? | A descriptively adequate treatment of under-dispersion can apparently be obtained from regression using a Conway-Maxwell-Poisson (COM) distribution or Consul's Generalized Poisson (GP) regression. It seems that just modeling with a Normality assumption is inefficient.
A review paper and responses for the COM model are h...
49,273 | Why is it true that a sampling distribution of a test statistic is easier to derive under the null? | Here is the easiest example I can think of to make the point.
Consider $X\sim N(\mu,1)$, i.e., sampling from a normal population with known variance 1. Then,
$$\sqrt{n}(\bar{X}_n-\mu)\sim N(0,1)$$
If the null is true, i.e., $\mu=\mu_0$, you have automatically also already derived the sampling distribution of the test ...
49,274 | Generate tail of distribution by a given sample in R | I'm assuming that the values are truncated below the threshold, $t$, rather than censored below $t$ (that is, you don't know how many there are below the threshold).
Let the number of points observed above the truncation point be $n_o$.
A simple approach could go as follows:
Estimate $p$, the proportion of the distrib...
49,275 | Distribution of $\sum_{i=1}^d | \mathbf{u}^H \mathbf{F} \mathbf{v}_i |^2$ if $| \mathbf{u}^H \mathbf{F} \mathbf{v}_i |^2$ is exponentially distributed | Check whether $u^HFv_i$ are independent of $u^HFv_j$. This should be easy, since these two variables, being linear combinations of normal variables, are normal, so checking independence is the same as checking whether covariance is zero. If the variables are independent then their squares will be independent too.
Update...
49,276 | Implications of point-wise convergence of the MGF - reference request | I finally found a proof I understood. I took it from Billingsley, "Probability and Measure". In order to be thorough, I reproduce the full argument here.
Thm: If $X_n$ is a sequence of random variables for which:
The MGF is defined for $t \in [-r,r]$
The MGF converges pointwise for $t \in [-r,r]$ to the MGF of $X$
th...
49,277 | How to train classifier for unbalanced class distributions? | Jain and Nag suggest a balanced training set and a representative test data set for evaluation.
The balanced training set allows the model to familiarize itself with the less frequent state of interest and helps the model to formulate general rules.
However, as @rep_ho points out, you should definitely use a test set...
49,278 | How to train classifier for unbalanced class distributions? | For an unbalanced sample, you can use oversampling for the classes that are underrepresented, or undersampling for those that have more representation.
But oversampling and undersampling should only be done if you feel that your sample doesn't represent the true population.
Now the question arises: how do we know whether my sample...
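The random oversampling idea above can be sketched in a few lines of numpy; the toy dataset, class sizes, and seed here are invented for illustration, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unbalanced dataset: 90 negatives, 10 positives.
y = np.array([0] * 90 + [1] * 10)
X = rng.normal(size=(100, 3))

# Random oversampling: resample minority-class indices with replacement
# until both classes have the same count.
counts = np.bincount(y)
minority = counts.argmin()
need = counts.max() - counts.min()
extra = rng.choice(np.flatnonzero(y == minority), size=need, replace=True)

X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
balanced_counts = np.bincount(y_bal)  # both classes now have 90 rows
```

Undersampling would instead subsample the majority class, which discards data but avoids duplicating minority rows.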
49,279 | How to train classifier for unbalanced class distributions? | You definitely should not balance your test set. The test set should be an independent assessment of your model.
You might use different scoring rules than accuracy, e.g. balanced accuracy (the mean of specificity and sensitivity), kappa, or the F score. Those measures depend on your possibly arbitrary decision of where to put the cutoff...
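A small Python sketch of why balanced accuracy is preferable to plain accuracy on unbalanced data; the toy labels are made up, and a classifier that always predicts the majority class serves as the degenerate case:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on class 1) and specificity (recall on class 0)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return (sens + spec) / 2

# A degenerate classifier that always predicts the majority class:
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros_like(y_true)

plain_acc = np.mean(y_true == y_pred)        # looks great: 0.95
bal_acc = balanced_accuracy(y_true, y_pred)  # reveals the problem: 0.5
```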
49,280 | Properties of MaxEnt posterior distribution for a die with prescribed average | The MaxEnt algorithm heavily favors distributions as close to uniform as possible. Therefore, given the constraint that the average is $4$, it is more optimal to add more mass to $6$ rather than $4$ in order for the posterior to stay close to uniform.
Why such a tendency? Perhaps it has something to do with the shortest ...
49,281 | Properties of MaxEnt posterior distribution for a die with prescribed average | A Bayesian inference of the posterior probabilities given prior uniform probabilities is closer to what you had in mind, because, indeed, the likelihood of getting a 4 for a loaded die that has a large probability of 4 is high.
The "problem" is that this inference results in a posterior non-uniform distribution even ...
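A hedged sketch of the computation behind the first answer above: the maximum-entropy distribution on $\{1,\dots,6\}$ with a fixed mean has the exponential-family form $p_i \propto e^{\lambda i}$, and $\lambda$ can be found by bisection (the bracket and iteration count here are arbitrary choices). Because the target mean 4 exceeds 3.5, $\lambda > 0$ and the probabilities increase with the face value, i.e. more mass goes to 6 than to 4:

```python
import math

faces = [1, 2, 3, 4, 5, 6]
target_mean = 4.0

def mean_for(lam):
    """Mean of the distribution p_i proportional to exp(lam * i)."""
    w = [math.exp(lam * k) for k in faces]
    z = sum(w)
    return sum(k * wk for k, wk in zip(faces, w)) / z

# The mean is strictly increasing in lambda, so plain bisection works.
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

w = [math.exp(lam * k) for k in faces]
z = sum(w)
p = [wk / z for wk in w]  # MaxEnt probabilities, increasing in the face value
```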
49,282 | Goodness of fit: power-law or discrete log-normal? | Don't confuse the statistic with the p-value.
The size of the KS statistic was small, meaning the biggest distance between the empirical distribution and the power-law was small (i.e. a close fit). The corresponding p-value follows the statistic and is large (i.e. it doesn't show a deviation large enough to be able to tell...
49,283 | Do sampling methods (MCMC/SMC) work for combination of continuous and discrete random variables? | $P$ has a density against a (reference) measure made of the Lebesgue measure plus the counting measure on $\{0,1\}$. The latter measure gives weights of $1$ to the atoms $0$ and $1$. This means that the density at atoms like $0$ and $1$ is equal to the weight against the counting measure, and only the counting measure! Hence...
49,284 | How can you resolve mixed normal distributions into their component data sets? | One approach would be to fit a two-component Gaussian mixture model. This models the observed distribution as a mixture $w_1 f(\mu_1,\sigma_1)+(1-w_1)f(\mu_2,\sigma_2)$, where $f$ is the normal density.
There are a number of approaches to doing so; the E-M algorithm (by introducing latent variables - in your case indicator...
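A minimal, self-contained EM sketch for the two-component 1-D Gaussian mixture described above; the simulated data, quantile initialization, and fixed iteration count are illustrative choices, not the only (or best) way to fit such a model:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated two-component data: 60% from N(0, 1) and 40% from N(5, 1.5^2).
x = np.concatenate([rng.normal(0.0, 1.0, 600), rng.normal(5.0, 1.5, 400)])

def normal_pdf(v, mu, sigma):
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# EM for w*N(mu1, s1^2) + (1-w)*N(mu2, s2^2), initialized from data quantiles.
w = 0.5
mu = np.percentile(x, [25, 75]).astype(float)
sigma = np.array([x.std(), x.std()])
for _ in range(200):
    # E-step: responsibility of component 1 for each observation.
    d1 = w * normal_pdf(x, mu[0], sigma[0])
    d2 = (1 - w) * normal_pdf(x, mu[1], sigma[1])
    r = d1 / (d1 + d2)
    # M-step: weighted mean/variance updates.
    w = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    sigma = np.array([
        np.sqrt(np.average((x - mu[0]) ** 2, weights=r)),
        np.sqrt(np.average((x - mu[1]) ** 2, weights=1 - r)),
    ])
```

The final responsibilities `r` give a soft assignment of each point to a component, which is how the mixture "resolves" the two data sets.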
49,285 | How can you resolve mixed normal distributions into their component data sets? | You need to find $\theta^\ast = (\mu_1,\sigma_1,\mu_2,\sigma_2,n_1)$ fitting your data, so you could try guessing various $\theta$ and then picking the top-scoring one. To score a particular $\theta$, you could try a Kolmogorov-Smirnov test.
I don't know how accurate you need it, but with a histogram like this ...
49,286 | Difference between principal directions and principal component scores in the context of dimensionality reduction | Most of these things are covered in my answers in the following two threads:
Relationship between SVD and PCA. How to use SVD to perform PCA?
What exactly is called "principal component" in PCA?
Still, here I will try to answer your specific concerns.
Think about it like that. You have, let's say, $1000$ data points...
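A short numpy illustration of the distinction discussed above: with a centered data matrix $X = USV^\top$, the columns of $V$ are the principal directions (unit-length axes) and $US$ — equivalently $XV$ — are the principal component scores (the projected data). The data here is simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 5))  # correlated 5-D data
Xc = X - X.mean(axis=0)                                   # center first

# SVD of the centered data: Xc = U S V^T.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

directions = Vt.T   # columns = principal directions (axes), unit length
scores = U * S      # principal component scores = projections of the data
# Equivalently: scores = Xc @ directions.
```

The variance of the j-th score column is $S_j^2 / n$, which is why the singular values order the components by explained variance.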
49,287 | Estimation of quantile given quantiles of subset | I suppose you deal with a data vector of length $n$ where $n$ is so large that it becomes necessary to spread the computations over many machines and that this vector cannot fit inside the memory of any single one of those machines. This squares neatly with the definition of big data as defined here.
Suppose that the d...
49,288 | How to get an effect size in nlme? | You can report the likelihood ratio test as an effect size measure. I'm not sure what the exact design of your overall model is, but say you're interested in a two-way repeated measures design where you want to assess the main effects of var1, var2, and the var1*var2 interaction.
To get the likelihood ratio, you can t...
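The likelihood-ratio idea can be illustrated outside nlme with two nested Gaussian (OLS) models; this is only a sketch of the statistic itself using profile log-likelihoods, with simulated data and made-up effect sizes, not the mixed-model machinery the answer refers to:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
var1 = rng.normal(size=n)
var2 = rng.normal(size=n)
y = 1.0 + 0.8 * var1 + rng.normal(size=n)  # var2 truly has no effect here

def gaussian_loglik(y, X):
    """Profile log-likelihood of an OLS fit (sigma^2 plugged in at its MLE)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / len(y)
    return -0.5 * len(y) * (np.log(2 * np.pi * s2) + 1)

X_full = np.column_stack([np.ones(n), var1, var2])
X_reduced = np.column_stack([np.ones(n), var1])  # drop var2

# LR statistic: twice the log-likelihood gap; chi-square with df = 1 dropped term.
lr = 2 * (gaussian_loglik(y, X_full) - gaussian_loglik(y, X_reduced))
p_value = stats.chi2.sf(lr, df=1)
```

In nlme the same comparison is done between fitted model objects (e.g. via `anova()` on two nested fits), but the statistic being reported is this one.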
49,289 | Bayesian updating, point for point? | You can indeed update point-by-point or via a batch of observations, so long as your observations are at least exchangeable. Exchangeable random variables are conditionally independent given an appropriate latent variable.
That is, you have
$$
p(X_{1}, \ldots, X_{n} \, | \, \theta) = \prod_{i = 1}^{n} p(X_{i} \, | \, \theta)
$$
...
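A tiny worked example of this equivalence: with a conjugate Beta prior and Bernoulli (i.i.d., hence exchangeable) data, updating point-by-point — yesterday's posterior becoming today's prior — gives exactly the same posterior as a single batch update. The prior and data below are invented:

```python
# Beta(a, b) prior for a coin's heads probability; Bernoulli observations.
observations = [1, 0, 1, 1, 0, 1, 1, 1]

# Point-by-point updating: each posterior is the prior for the next point.
a, b = 1.0, 1.0
for x in observations:
    a += x
    b += 1 - x

# Batch updating: condition on everything at once.
heads = sum(observations)
a_batch = 1.0 + heads
b_batch = 1.0 + len(observations) - heads
```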
49,290 | GLM diagnostics and Deviance residual | Deviance residuals will not in general have 0 mean; they don't for Gamma models.
However, the mean deviance residual tends to be reasonably close to 0.
Here's an example of a residual plot from a simple identity-link gamma fit (to simulated data for which the model was appropriate; in this case the shape parameter of th...
49,291 | GLM diagnostics and Deviance residual | I am going to preface this statement with: I am no statistician (I can understand and apply statistical concepts) and I am no GLM expert. From my understanding, GLMs follow the same assumptions as linear models. If the residuals deviate from the fitted values in a uniform way, it would indicate that the model is either ...
49,292 | Why is coxph() so fast for survival analysis on big data? | Believe it or not, it's just Newton-Raphson. It's right here. The weighted mean and covariance matrices mentioned in the vignette passage are Equations (3.4) through (3.6).
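For intuition, here is a generic Newton-Raphson sketch on a deliberately simple log-likelihood (the MLE of an exponential rate, which has a known closed form to check against). This illustrates only the iteration scheme, not the actual coxph partial-likelihood internals; the data and starting value are invented:

```python
# Newton-Raphson for the MLE of an exponential rate lambda given data x:
#   loglik(l)  = n*log(l) - l*sum(x)
#   score(l)   = n/l - sum(x)
#   hessian(l) = -n/l**2
x = [0.5, 1.2, 0.3, 2.0, 0.7, 1.1]
n, s = len(x), sum(x)

lam = 1.0  # starting value
for _ in range(25):
    score = n / lam - s
    hessian = -n / lam ** 2
    lam -= score / hessian  # Newton step: new = old - score/hessian

mle = n / s  # closed-form MLE, for comparison
```

The appeal for Cox models is the same: a handful of quadratically-converging Newton steps, each requiring only weighted sums over the data, which is why fitting stays fast even on large datasets.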
49,293 | Moving Average (MA) process: numerical intuition | Essentially, I agree with @IrishStat, but I would like to "rephrase" the answer a little.
If you assume that $Y_t$ follows an MA(2) process, then you have
$$Y_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}$$
(I assume no intercept for simplicity.) Note that this is not what you have in you...
49,294 | Moving Average (MA) process: numerical intuition | The current error $e_t$ is never known until after $Y_t$ is observed, thus it is set to $0.0$. The MA(2) process is $Y_t = 0.5\, e_{t-1} + 0.5\, e_{t-2} + e_t$ where $e_t = 0.0$. No forecast is possible until period 3.
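The two answers above can be made concrete with a short simulation: generate an MA(2) series with pre-sample errors set to 0, recover the innovations by running the same recursion in reverse, and form the one-step forecast with the unknown future error set to its mean, 0. The parameters $\theta_1 = \theta_2 = 0.5$ follow the answer; the sample size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
theta1, theta2 = 0.5, 0.5
n = 200

# Simulate y_t = e_t + 0.5*e_{t-1} + 0.5*e_{t-2}, with pre-sample errors 0.
e = rng.normal(size=n)
y = np.empty(n)
for t in range(n):
    y[t] = e[t]
    if t >= 1:
        y[t] += theta1 * e[t - 1]
    if t >= 2:
        y[t] += theta2 * e[t - 2]

# Recover the errors recursively: e_t = y_t - 0.5*e_{t-1} - 0.5*e_{t-2},
# again starting the recursion from zero pre-sample errors.
e_hat = np.zeros(n)
for t in range(n):
    e_hat[t] = y[t]
    if t >= 1:
        e_hat[t] -= theta1 * e_hat[t - 1]
    if t >= 2:
        e_hat[t] -= theta2 * e_hat[t - 2]

# One-step-ahead forecast: the unknown future error is set to its mean, 0.
forecast_next = theta1 * e_hat[-1] + theta2 * e_hat[-2]
```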
49,295 | Margin of Error of Sample Variance | If you reparameterize in terms of:
$$\sqrt{n} \left( \left[\begin{array}{c} \bar{X} \\ S_n^2 \end{array}\right] - \left[\begin{array}{c} \mu \\ \sigma^2 \end{array}\right] \right) \rightarrow_d \mathcal{N} \left( \left[ \begin{array}{c} 0 \\ 0 \end{array} \right] , \left[ \begin{array}{cc} \sigma^2 & 0 \\ 0 & 2\sigma^4 \end{array} \right] \right)$$
...
49,296 | Margin of Error of Sample Variance | You can frame this problem more simply by looking at the inverse gamma distribution:
$$W_n \equiv \frac{n}{\chi_n^2} \sim \text{Inverse-Gamma}(\tfrac{n}{2},\tfrac{n}{2}).$$
This distribution has mean $\mathbb{E}(W_n) = n/(n-2)$ and variance $\mathbb{V}(W_n) = 2 n^2/((n-2)^2 (n-4))$, so you have $\mathbb{E}(W_n) \rightarrow...
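As a numeric companion to the two answers above: under normality, $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$, which gives an exact (not just asymptotic) confidence interval for $\sigma^2$. The sample, true parameters, and confidence level below are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(loc=10.0, scale=2.0, size=50)
n = len(x)
s2 = x.var(ddof=1)  # sample variance

# Exact 95% CI for sigma^2: invert (n-1)*S^2/sigma^2 ~ chi^2_{n-1}.
alpha = 0.05
lower = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)
upper = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)
```

Note the interval is not symmetric around $S^2$, reflecting the skewness of the chi-square (equivalently, inverse-gamma) distribution discussed above.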
49,297 | Multiple comparisons after Kruskal Wallis using the FDR approach. How to compute P values (Dunn or Mann-Whitney)? | From what I understood of the OP question:
1) He ran an omnibus Kruskal-Wallis test with significant results.
2) He wants to run a pairwise test on all groups and is in doubt whether to use Mann-Whitney or Dunn's test.
3) He wants to run his own multiple comparison adjustment procedure, so he needs the uncorrected p-values of...
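A sketch of that workflow in Python rather than R: uncorrected pairwise Mann-Whitney p-values followed by a hand-rolled Benjamini-Hochberg FDR adjustment (written to match what R's `p.adjust(method="BH")` computes, though that equivalence is my claim, not the answer's). The three groups are simulated:

```python
import numpy as np
from itertools import combinations
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
groups = {
    "A": rng.normal(0.0, 1.0, 30),
    "B": rng.normal(0.0, 1.0, 30),
    "C": rng.normal(1.5, 1.0, 30),  # shifted group
}

# Uncorrected pairwise p-values, as in step 3) above.
pairs = list(combinations(groups, 2))
raw_p = np.array([
    mannwhitneyu(groups[a], groups[b], alternative="two-sided").pvalue
    for a, b in pairs
])

def benjamini_hochberg(p):
    """BH step-up adjusted p-values."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from the largest p down
        i = order[rank - 1]
        running_min = min(running_min, p[i] * m / rank)
        adj[i] = running_min
    return adj

adj_p = benjamini_hochberg(raw_p)
```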
49,298 | Rule of thumb for using logarithmic scale | As a rule of thumb, try to make the data fit a (standard) normal distribution, a uniform distribution or any other distribution where the values are more or less “evenly” distributed.
As a measurement, one thing that you could aim for is to maximize the distribution’s entropy for a fixed variance.
So, if your data is a...
49,299 | Rule of thumb for using logarithmic scale | (So as to be kosher and not mix the question with an answer.)
Right now I am using the scale which minimizes the following ratio:
$$\frac{\sqrt[4]{\langle (x - \bar{x})^4 \rangle}}{\sqrt{\langle (x - \bar{x})^2 \rangle}}$$
That is, after normalizing a variable (i.e. mean 0 and variance 1) I am looking to have the 4th moment a...
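The ratio above is straightforward to compute; this numpy sketch compares it on a simulated lognormal sample before and after taking logs (for a standardized variable it equals kurtosis$^{1/4}$, about $3^{1/4} \approx 1.32$ for a Gaussian). The sample size and lognormal parameters are arbitrary:

```python
import numpy as np

def tail_ratio(x):
    """Fourth root of the 4th central moment, over the standard deviation."""
    x = np.asarray(x, dtype=float)
    c = x - x.mean()
    return (c ** 4).mean() ** 0.25 / np.sqrt((c ** 2).mean())

rng = np.random.default_rng(0)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=5000)  # heavy right tail
ratio_raw = tail_ratio(raw)           # well above the Gaussian value
ratio_log = tail_ratio(np.log(raw))   # log(raw) is exactly Gaussian here
```

Under this criterion the log scale is preferred for the lognormal sample, matching the rule of thumb in the other answer.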
49,300 | Evaluate glmtree model | The strategy you describe looks very reasonable. For evaluation you can use the usual kinds of measures that you employ for other binary classifiers (or trees in particular): misclassification rate (or conversely classification accuracy), log-likelihood, ROC, AUC, etc. Personally, I often use the ROCR package, but the pROC...
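As a companion to the ROC/AUC suggestion, here is a small self-contained AUC computation via the rank-sum identity (equivalent to the Mann-Whitney U statistic), with no ROCR/pROC dependency; the toy labels and scores are made up, and ties in the scores are not handled:

```python
import numpy as np

def auc(y_true, scores):
    """AUC = probability that a random positive outscores a random negative,
    computed from the rank-sum of the positive-class scores (no tie handling)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([0, 0, 1, 0, 1, 1])
perfect = auc(y, np.array([0.1, 0.2, 0.9, 0.3, 0.8, 0.7]))      # 1.0
inverted = auc(y, np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4]))     # well below 0.5
```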