43,101
Is the inductive bias a prior?
A prior is prior knowledge that can help us learn new concepts from the available data. A prior, or inductive prior, is also known as an inductive bias.
Here, the word "inductive" does not carry the strict mathematical meaning of induction; rather, it indicates that we make inferences based on previous knowledge.
In this sense, the inductive bias is a prior (a prior distribution): knowledge about the data held before observing any data.
The posterior distribution is the knowledge after the new evidence (the data) has been observed, taking the prior knowledge into account.
In this sense, each of these distributions can be regarded as knowledge about the data.
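The prior-to-posterior update described above can be sketched with a conjugate Beta-Bernoulli example (the Beta(2, 2) prior and the 7-successes/3-failures data are hypothetical, chosen only for illustration):

```python
# Conjugate Beta-Bernoulli update: the prior encodes knowledge about the
# success probability before any data; the posterior combines it with data.
a, b = 2.0, 2.0        # Beta(2, 2) prior: mild belief that p is near 0.5
heads, tails = 7, 3    # observed data

a_post, b_post = a + heads, b + tails   # posterior is Beta(9, 5)
prior_mean = a / (a + b)                # 0.5
post_mean = a_post / (a_post + b_post)  # 9/14, pulled toward the data
```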

43,102
Is the inductive bias a prior?
No. This is the likelihood, as is evident from the definition. You introduce inductive bias in linear regression by assuming that the data follow a linear model. Even in Bayesian approaches you are introducing some inductive bias by assuming that the data follow a model from a certain family.

43,103
Computing cross-validated $R^2$ from mean cross-validation error
The confusion was caused by a one-symbol typo in the originally posted code (see comments above).
The answers are:
Yes.
Yes, but only for a Gaussian GLM (as far as I understand from the glmnet package description).
It depends on how you define $R^2$ for the weighted regression. The deviance ratio will take the weights into account (I don't know exactly how), whereas $R^2$ defined via the mean error (obviously) will not.
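As a sketch of the relation between mean cross-validation error and $R^2$ (plain NumPy rather than glmnet; the simulated data and the fold count are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# 5-fold cross-validation: collect squared errors on the held-out folds
sq_err = []
for test_idx in np.array_split(rng.permutation(n), 5):
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    slope, intercept = np.polyfit(x[train_idx], y[train_idx], 1)
    pred = intercept + slope * x[test_idx]
    sq_err.extend((y[test_idx] - pred) ** 2)

# cross-validated R^2 = 1 - (mean CV error) / Var(y)
cv_r2 = 1.0 - np.mean(sq_err) / np.var(y)
```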

43,104
What is the objective of maximum likelihood estimation?
The objective is to estimate the parameters or, more precisely, to obtain a method for their estimation (since the same form of likelihood can be applied to different data sets).
There are different ways to choose parameter estimators; maximum likelihood is just one of them. Its criterion is that the estimator should make the probability of obtaining the observed result maximal. Maximum likelihood estimators have many convenient mathematical properties.
$P(Y|X, \theta)$ is a function relating the predictor variables $X$ and the output variables $Y$, parametrized by parameters $\theta$. Its functional form is chosen a priori and limits how close it can be to the "true" distribution (if such a "true" distribution exists at all): e.g., a normal/Gaussian density can approximate many distributions well (gamma, lognormal, etc.), but it will never reveal that the underlying distribution is not normal.
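A minimal numerical sketch of the idea: choose $\theta$ to maximize the probability of the observed data, here for a normal model (the data are simulated, and the log-sigma parametrization is just a convenience to keep $\sigma > 0$):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=2.0, size=500)

def neg_log_lik(theta):
    # negative log-likelihood of N(mu, sigma^2), additive constants dropped
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return data.size * log_sigma + np.sum((data - mu) ** 2) / (2 * sigma**2)

res = minimize(neg_log_lik, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
# for the normal model the MLE equals the sample mean and the (biased) sd
```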

43,105
What is the objective of maximum likelihood estimation?
Oh, I think I understand a bit better now. The objective is to find the parameters that maximize the likelihood of the observations we actually made. For example, say we draw 5 marbles from a bag and 3 of them are black; we then place them back in the bag. Our objective, then, is to figure out the fraction of black marbles in the bag that would make us most likely to see 3 black marbles the next time we draw 5.
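The marble example can be checked directly by scanning the binomial likelihood over a grid of candidate fractions (pure standard library; the grid resolution is arbitrary):

```python
import math

n, k = 5, 3  # drew 5 marbles, 3 were black
# binomial likelihood L(p) = C(5, 3) * p^3 * (1 - p)^2
grid = [i / 1000 for i in range(1, 1000)]
lik = [math.comb(n, k) * p**k * (1 - p) ** (n - k) for p in grid]
p_hat = grid[lik.index(max(lik))]  # maximized at p = 3/5
```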

43,106
Kolmogorov-Smirnov test with dependent data
The Kolmogorov-Smirnov test should only be used if the samples are independent. However, here you can find a script that does the job in the form of a permutation test (in R).
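One way such a permutation test can look (a sketch, not the linked R script: labels are swapped within pairs so that the permutation scheme respects the dependence; the data are simulated):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
x = rng.normal(size=100)
y = x + rng.normal(scale=0.5, size=100)  # dependent, paired with x

observed = ks_2samp(x, y).statistic

# null distribution: within each pair, randomly swap the two observations
count = 0
n_perm = 500
for _ in range(n_perm):
    swap = rng.random(x.size) < 0.5
    xp, yp = np.where(swap, y, x), np.where(swap, x, y)
    count += ks_2samp(xp, yp).statistic >= observed
p_value = (count + 1) / (n_perm + 1)
```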

43,107
Kolmogorov-Smirnov test with dependent data
The Kolmogorov-Smirnov test is designed for two independent samples; you need a test for paired observations. I do not think anyone has ever proposed a KS-test analog for paired observations. It is also a very cautious test, that is, it has low power in many circumstances.
I suggest a paired nonparametric test, either a sign test or a Wilcoxon signed-rank test, both based on the differences between the two scores for each subject. The latter is more common and has good power for observations with a normal distribution and for a wide range of non-normal distributions.
If the two measures are strongly correlated, the sign test may be quite powerful, possibly more so than the Wilcoxon signed-rank test. Both tests are based on permutations of the observations: the signed differences for the Wilcoxon signed-rank test, and only the signs of the differences for the sign test.
Friedman's test is usually used for three or more measurements. With only two measurements it reduces to the sign test.
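The two suggested paired tests can be sketched with scipy (the paired scores are simulated with a positive shift, chosen only for illustration):

```python
import numpy as np
from scipy.stats import binomtest, wilcoxon

rng = np.random.default_rng(3)
before = rng.normal(size=30)
after = before + 0.8 + rng.normal(scale=0.5, size=30)  # paired shift of 0.8

diff = after - before

# Wilcoxon signed-rank test: uses the signed ranks of the paired differences
w_p = wilcoxon(before, after).pvalue

# sign test: counts positive differences against Binomial(n, 1/2)
s_p = binomtest(int((diff > 0).sum()), n=diff.size, p=0.5).pvalue
```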

43,108
Kolmogorov-Smirnov test with dependent data
I believe what you need is a (related-samples) Friedman's two-way analysis of variance by ranks (see this).
It comes with a pair of null hypotheses about the distributions you are comparing that you can reject/retain depending on the p-value:
H0(a): "The populations represented by the k conditions have the same distribution of scores."
H0(b): "The population has the same distribution of scores on the different measures represented by the conditions."
Or, as stated in (2):
"For Friedman's two-way analysis of variance by ranks, the null hypothesis states that the K repeated measures or matched groups come from the same population or from populations with the same median (1). Under the null hypothesis, the test assumes that the response variable has the same underlying continuous distribution;"
Not very relevant to R, but the aforementioned test is also what IBM offers in the SPSS (v24) statistical package for testing whether k related samples have been drawn from the same population (see the IBM SPSS v24 documentation):
"Compare Distributions. Friedman's 2-way ANOVA by ranks (k samples) produces a related samples test of whether k related samples have been drawn from the same population. You can optionally request multiple comparisons of the k samples, either All pairwise multiple comparisons or Stepwise step-down comparisons."
References
(1) Siegel, S. (1956). Nonparametric statistics for the behavioral sciences.
(2) Pereira, D. G., Afonso, A., & Medeiros, F. M. (2015). Overview of Friedman's test and post-hoc analysis. Communications in Statistics - Simulation and Computation, 44(10), 2636-2653.
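A small sketch of Friedman's test with three related measurements per subject (the data are simulated; scipy's `friedmanchisquare` requires at least three conditions):

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(4)
n = 25
subject = rng.normal(size=n)  # shared subject effect induces the dependence
conditions = [subject + shift + rng.normal(scale=0.3, size=n)
              for shift in (0.0, 0.2, 0.8)]  # three repeated measures

stat, p = friedmanchisquare(*conditions)
# a small p rejects the null that the measures share one distribution
```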

43,109
How to use LDA results for feature selection?
If it doesn't need to be vanilla LDA (which is not designed to select among input features), there is, e.g., Sparse Discriminant Analysis, a LASSO-penalized LDA:
Line Clemmensen, Trevor Hastie, Daniela Witten, Bjarne Ersbøll: Sparse Discriminant Analysis (2011)
This selects a discrete subset of the input features via the LASSO regularization.

43,110
How to use LDA results for feature selection?
LDA models are used to predict a categorical variable (factor) from one or several continuous (numerical) features. So, given some measurements about a forest, you will be able to predict which type of forest a given observation belongs to. Before applying an LDA model, you have to determine which features are relevant for discriminating the data. To do so, apply an ANOVA model to each numerical variable. In each of these ANOVA models, the variable to explain (Y) is the numerical feature, and the explanatory variable (X) is the categorical feature you want to predict in the LDA model. This tells you whether the mean of the numerical feature stays the same across forest types or not. If it does, the feature will not give you any information to discriminate the data; it is not relevant and you will not use it. However, if the mean of a numerical feature differs depending on the forest type, it will help you discriminate the data and you will use it in the LDA model.
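The per-feature ANOVA screen described above can be sketched as follows (the feature names and data are made up for illustration):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
n = 60
forest = np.repeat([0, 1, 2], n // 3)       # three forest types (the factor)
height = rng.normal(size=n) + 1.5 * forest  # mean differs by type
humidity = rng.normal(size=n)               # mean does not differ

def anova_p(feature):
    # one-way ANOVA of the numerical feature grouped by forest type
    return f_oneway(*(feature[forest == g] for g in np.unique(forest))).pvalue

keep = [name for name, feat in (("height", height), ("humidity", humidity))
        if anova_p(feat) < 0.05]  # retain only discriminative features
```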

43,111
How to use LDA results for feature selection?
I don't know if this may be of any use, but I wanted to mention the idea of using LDA to give an "importance value" to each feature (for selection), by computing the correlation of each feature with each component (LD1, LD2, LD3, ...) and selecting the features that are highly correlated with some important components.
Perhaps the explained variance of each component can be used directly in the computation as well:
sum(explained_variance_ratio_of_component * weight_of_feature) or sum(explained_variance_ratio_of_component * correlation_of_feature)
I have not yet found documentation about this, so it is more a possible idea to follow than a straightforward solution. Please let me know your thoughts about this.
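One possible reading of this idea in code (a sketch on simulated data; the weighting scheme is the answer's own proposal, not an established method):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
n = 150
y = np.repeat([0, 1, 2], n // 3)
X = rng.normal(size=(n, 4))
X[:, 0] += 2.0 * y  # strongly class-dependent feature
X[:, 1] += 0.5 * y  # weakly class-dependent feature

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
scores = lda.transform(X)  # LD1, LD2

# importance_j = sum_c explained_variance_ratio_c * |corr(feature_j, LD_c)|
importance = np.array([
    sum(r * abs(np.corrcoef(X[:, j], scores[:, c])[0, 1])
        for c, r in enumerate(lda.explained_variance_ratio_))
    for j in range(X.shape[1])
])
ranked = np.argsort(importance)[::-1]  # most important feature first
```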

43,112
multi-class classification with word2vec
It is always hard to assess a priori the effect of a preprocessing step on the data. Even something as simple as normalizing the data does not have an obvious influence on the performance of the classifiers trained afterwards (see, for example, this post: Normalizing data worsens the performance of CNN?).
However, the following links may help you implement your idea:
Text Classification With Word2Vec: the author assesses the performance of various classifiers on text documents with a word2vec embedding. It turns out that the best performance is attained by the "classical" linear support vector classifier with a TF-IDF encoding (the approach is really helpful in terms of code, especially if you work with Python and scikit-learn).
Regarding SVMs, there are kernels designed for text. I once had nice results with information diffusion kernels and a TF-IDF encoding. There are also kernels that work directly on strings (Text Classification using String Kernels), but their implementations are scarcer...

43,113
multi-class classification with word2vec
The best place to start is with a linear kernel, since it is (a) the simplest and (b) often works well with text data. You could then try nonlinear kernels such as the popular RBF kernel.
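A toy comparison of the two kernels on TF-IDF features (the six documents and labels are invented; real text data would of course be far larger):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

texts = ["the cat sat on the mat", "dogs chase the cat",
         "stock prices rose sharply", "the market fell today",
         "a cat and a dog played", "markets and stocks rallied"]
labels = [0, 0, 1, 1, 0, 1]  # 0 = animals, 1 = finance

X = TfidfVectorizer().fit_transform(texts)

# start with the linear kernel, then try the nonlinear RBF kernel
linear_acc = SVC(kernel="linear").fit(X, labels).score(X, labels)
rbf_acc = SVC(kernel="rbf").fit(X, labels).score(X, labels)
```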

43,114
Closed form function relating $\mu$ to the natural parameter for the logarithmic series distribution?
You have the answer in your question: Lambert W.
In R you can use
# Load library
library(LambertW)
# Define function to obtain p from mu
p <- function(mu) {1 - exp((1/mu) + W(-1/(mu*exp(1/mu)), -1))}
# Show (visually) that the function provides the correct values
mu <- 1 + 10*c(0:100)/100
plot(p(mu), mu, xlab="p", ylab="mu", las=1)
pp <- c(1:99)/100
lines(pp, -pp/((1-pp)*log(1-pp)))
to obtain a plot in which the points computed via the Lambert W inversion lie on the curve $\mu = -p/((1-p)\log(1-p))$.
In Mathematica you can use
data = Table[{1 - Exp[(1/mu) +
    ProductLog[-1, -1/(mu Exp[1/mu])]], mu}, {mu, 1.01, 5, 0.01}];
cp = -1/Log[1 - p];
ListPlot[{data, Table[{p, cp p/(1 - p)}, {p, 0.1, 0.93, 0.01}]},
    PlotStyle -> {{PointSize[0.03]}, {PointSize[0.01]}}]
(Mathematica was used to determine the general form of the relationship for determining $p$ from $\mu$.)
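The same inversion can be cross-checked in Python with `scipy.special.lambertw` (lower branch, $k=-1$), verifying the round trip through the log-series mean $\mu = -p/((1-p)\ln(1-p))$:

```python
import numpy as np
from scipy.special import lambertw

def mu_from_p(p):
    # mean of the logarithmic series distribution with parameter p
    return -p / ((1.0 - p) * np.log(1.0 - p))

def p_from_mu(mu):
    # inverse via the lower (-1) branch of the Lambert W function
    w = lambertw(-1.0 / (mu * np.exp(1.0 / mu)), k=-1).real
    return 1.0 - np.exp(1.0 / mu + w)

p = 0.5
mu = mu_from_p(p)       # 1/ln(2), approximately 1.4427
p_back = p_from_mu(mu)  # recovers 0.5
```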

43,115
ARMA-GARCH model selection / fit evaluation
In short, you should select models using AIC and/or out-of-sample fit criteria, and view the rejected hypothesis as a suggestion to consider other types of models.
When using this class of time series models, researchers are usually interested in accurate prediction/forecasting. Since AIC measures how well a model predicts the data in-sample, it operates as a fair means of model selection in this case (you may also want to test how well the models fit out-of-sample; more on that below).
However, just because a particular model has the lowest AIC does not mean that that model is correctly specified or that it approximates the true data-generating process well. It could be that all the models you proposed were poor choices, or that the true process the FTSE follows is so complex that practically every reasonable model will be rejected given enough data. AIC provides no information on this point, which is where hypothesis testing can come in.
Under the assumptions of standard ARMA-GARCH, the standardized residuals should be homoscedastic and, more generally, iid normal. Your hypothesis test suggests that your residuals are not homoscedastic and, in turn, that your ARMA-GARCH model may be misspecified. On this note, you may want to consider alternative specifications for the volatility process, including other variants of GARCH models (e.g., EGARCH, GJR-GARCH, TGARCH, AVGARCH, NGARCH, GARCH-M) and/or stochastic volatility models. It is quite likely that one of these models will offer a lower AIC value and produce residuals for which homoscedasticity cannot be rejected.
One important thing to note, though, is that no model will be perfect, especially for something like the FTSE 100. The true data-generating process driving a large financial index like this is impossibly complex, so pretty much every model you propose will be false. For this reason, it can be argued that any meaningful hypothesis you do not reject reflects insufficient data or a lack of statistical power rather than evidence supporting one model over others.
One way to partially resolve this dilemma is to use out-of-sample fit instead of, or in conjunction with, AIC. A simple example would be to fit the model using only the first 80% or 90% of the data and use the resulting coefficient estimates to obtain a log-likelihood for the remaining 20% or 10% of the data. The model with the highest log-likelihood would be preferred. If the ARMA-GARCH model is truly misspecified in a way that impairs its forecasting performance, an out-of-sample fit will help expose it.
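The out-of-sample comparison can be sketched generically (plain Gaussian likelihoods stand in for full ARMA-GARCH fits; the toy data, the 80% split, and the two "models" are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
returns = 0.01 * rng.standard_t(df=5, size=1000)  # heavy-tailed toy "returns"

split = int(0.8 * len(returns))  # fit on the first 80%, score on the rest
train, test = returns[:split], returns[split:]

def gaussian_loglik(x, mu, sigma):
    # held-out log-likelihood under N(mu, sigma^2)
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# model A: mean and volatility estimated from the whole training window
ll_a = gaussian_loglik(test, train.mean(), train.std())
# model B: same mean, but volatility from only the last 100 observations
ll_b = gaussian_loglik(test, train.mean(), train[-100:].std())
# prefer whichever model attains the higher held-out log-likelihood
```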
43,116
|
How to standardize proportions from US Census data
|
After talking to local statisticians and not seeing any other answers, I can provide a partial answer. I'm also happy to remove the question if commenters think it is too narrow.
The number of respondents is the right sample size for the score computations. I was using 1%, and I've since learned that 2/3 of 1% is a better estimate of the response rate. I can get state level sample sizes from the Census Bureau. I've also verified the data comes from the American Community Survey rather than the general census, which doesn't ask relationship questions.
It was also suggested to exclude the far outliers when computing the grand mean with the idea that those locations are categorically different from the general population of counties.
Another technique for handling variation due to small samples is Small Area Estimation, which can be thought of as a kind of weighted smoother.
Though I had forgotten the source, I now realize my inspiration for this line of exploration was Howard Wainer's discussion of similar issues with cancer rates by county and test results by school, collected in Picturing the Uncertain World.
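The shrinkage idea behind Small Area Estimation can be sketched in a few lines. In this hypothetical Python example (the function name, the prior weight `m`, and all numbers are invented for illustration), a county's proportion is pulled toward the grand mean, with small-sample counties pulled harder:

```python
def shrink_proportion(p_county, n_county, grand_mean, m=100.0):
    """Weighted compromise between a county's own proportion and the
    grand mean; m acts like a prior sample size controlling the pull."""
    w = n_county / (n_county + m)
    return w * p_county + (1 - w) * grand_mean

# A small county with an extreme proportion is pulled strongly...
small = shrink_proportion(0.40, 20, 0.10)
# ...while a large county keeps nearly its observed value.
large = shrink_proportion(0.40, 5000, 0.10)
print(small, large)
```

This is exactly the "weighted smoother" behavior: the less data a county has, the more its estimate defers to the overall mean.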
|
43,117
|
Time varying coefficient in Cox model
|
Just answered my own question with the same problem. Basically you need to do a time-split as you describe and then add an interaction term for time. As you suggested, it sounds like a good idea to split time more finely at the beginning and then increase the intervals. Using my Greg package you can set the by-time either to a single interval or to a vector, e.g. if we create a dataset with four subjects:
test_data <- data.frame(
id = 1:4,
time = c(4, 3.5, 1, 5),
event = c("censored", "dead", "alive", "dead"),
age = c(62.2, 55.3, 73.7, 46.3),
date = as.Date(
c("2003-01-01",
"2010-04-01",
"2013-09-20",
"2002-02-23")))
Looking like this:
| id| time|event | age|date |
|--:|----:|:--------|----:|:----------|
| 1| 4.0|censored | 62.2|2003-01-01 |
| 2| 3.5|dead | 55.3|2010-04-01 |
| 3| 1.0|alive | 73.7|2013-09-20 |
| 4| 5.0|dead | 46.3|2002-02-23 |
We then split each subject into several rows with:
library(Greg)
library(dplyr)
split_data <-
test_data %>%
select(id, event, time, age, date) %>%
timeSplitter(by = c(.1, .5, 2), # The time that we want to split by
event_var = "event",
time_var = "time",
event_start_status = "alive",
time_related_vars = c("age", "date"))
knitr::kable(split_data)
Gives:
| id|event | age| date| Start_time| Stop_time|
|--:|:--------|----:|--------:|----------:|---------:|
| 1|alive | 62.2| 2002.999| 0.0| 0.1|
| 1|alive | 62.3| 2003.099| 0.1| 0.5|
| 1|alive | 62.7| 2003.499| 0.5| 2.0|
| 1|censored | 64.2| 2004.999| 2.0| 4.0|
| 2|alive | 55.3| 2010.246| 0.0| 0.1|
| 2|alive | 55.4| 2010.346| 0.1| 0.5|
| 2|alive | 55.8| 2010.746| 0.5| 2.0|
| 2|dead | 57.3| 2012.246| 2.0| 3.5|
| 3|alive | 73.7| 2013.718| 0.0| 0.1|
| 3|alive | 73.8| 2013.818| 0.1| 0.5|
| 3|alive | 74.2| 2014.218| 0.5| 1.0|
| 4|alive | 46.3| 2002.145| 0.0| 0.1|
| 4|alive | 46.4| 2002.245| 0.1| 0.5|
| 4|alive | 46.8| 2002.645| 0.5| 2.0|
| 4|dead | 48.3| 2004.145| 2.0| 5.0|
As described in my previous answer you now just need to model using Surv(Start_time, Stop_time, event) together with an additional : interaction term between time and your variable (note that you should also include the variable itself, without the interaction, in the model).
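For readers outside R, the splitting step itself is easy to reproduce. Here is a rough Python sketch of what `timeSplitter` does for a single subject (simplified: it ignores the updating of time-related covariates such as age and date):

```python
def split_subject(time, event, breaks, start_status="alive"):
    """Split one subject's follow-up at the given break points.

    Every interval carries start_status except the last one, which
    carries the subject's own event status.
    """
    cuts = [b for b in breaks if b < time]
    edges = [0.0] + cuts + [time]
    rows = []
    for i in range(len(edges) - 1):
        status = event if i == len(edges) - 2 else start_status
        rows.append((edges[i], edges[i + 1], status))
    return rows

# Reproduces the Start_time/Stop_time rows for subject 1 above
for row in split_subject(4, "censored", [0.1, 0.5, 2]):
    print(row)
```

Each original row becomes several (start, stop, status) rows, which is exactly the counting-process form the Cox model with time interactions needs.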
|
43,118
|
Conditional Expected Value of Product of Normal and Log-Normal Distribution
|
What is the intended use of the result? That bears on what form of answer is needed, including whether a stochastic (Monte Carlo) simulation approach might be adequate. There is also the bigger-picture question of whether this problem needs to be solved at all: if it arose as a way of attacking a higher-level problem, there may be a better approach to that problem which doesn't require this one.
Here is a stochastic (Monte Carlo) simulation solution in MATLAB.
a = 1; b = 2; c = 3; d = 4; k = -1; % Made up values for illustrative purposes
n = 1e8; % Number of replications
mux = 10; sigmax = 4; sigmay = 7; % Made up values for illustrative purposes
X = mux + sigmax * randn(n,1); Y = sigmay * randn(n,1); Y1 = a + b + c + d * Y;
success_index = exp(X).*Y1 > 0; % replications in which condition is true
num_success = sum(success_index);
Cond_Sample = exp(X(success_index)) .* Y1(success_index) + k;
disp([num_success mean(Cond_Sample) std(Cond_Sample)/sqrt(num_success)])
% Printed output:
% 1.0e+09 *
% 0.058475265000000 1.502775087443930 0.057342191058931
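The same simulation is straightforward to reproduce outside MATLAB. A rough Python translation using only the standard library (with far fewer replications, so the estimates are noisier):

```python
import math
import random

random.seed(0)
a, b, c, d, k = 1, 2, 3, 4, -1  # made-up values, as above
mux, sigmax, sigmay = 10, 4, 7
n = 100_000                     # number of replications

cond_sample = []
for _ in range(n):
    x = mux + sigmax * random.gauss(0, 1)
    y1 = a + b + c + d * sigmay * random.gauss(0, 1)
    w = math.exp(x) * y1
    if w > 0:                   # keep replications meeting the condition
        cond_sample.append(w + k)

num_success = len(cond_sample)
mean = sum(cond_sample) / num_success
var = sum((v - mean) ** 2 for v in cond_sample) / num_success
print(num_success, mean, math.sqrt(var / num_success))
```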
|
43,119
|
Conditional Expected Value of Product of Normal and Log-Normal Distribution
|
Comments:
The joint density is given by multiplying the densities since they are independent. One variable is just a parameter to the other.
$Ye^X$ is not normally distributed, so approach (4) won't work.
The expressions below might allow you to find some approximation. If not they are relatively easy to evaluate with a computer.
Let $X\sim\mathcal{N}(\mu, \sigma^2)$, denote $c = -k/y$, and $d(y)=\log(-k/y)$.
Let $Z=e^X$, then
$$E[YZ\mid B] = E\big[\,Y\,E[Z\mid Y, B]\;\big|\;B\big],$$
and,
$$E[Z\mid Z>c] = e^{\mu+\sigma^2/2}\frac{P(X>d-\sigma^2)}{P(X>d)},$$
if $c>0$ and $E[Z] = e^{\mu+\sigma^2/2}$ otherwise. Thus, since $c>0$ only if $Y<0$,
$$E[YZ\mid B] = \frac{1}{2}e^{\mu+\sigma^2/2}\bigg(E[Y\mathbb{1}(Y>0)] + E\Big[Y\frac{P(X>d(Y)-\sigma^2)}{P(X>d(Y))}\mathbb{1}(Y<0)\Big]\bigg).$$
The first part is simply
$$\int_0^\infty y f_Y(y)\,dy,$$
and the second
$$\int_{-\infty}^0 y \frac{P(X>\log(-\frac{k}{y})-\sigma^2)}{P(X>\log(-\frac{k}{y}))}f_Y(y)\,dy.$$
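The conditional expectation $E[Z\mid Z>c]$ used above has a closed form that is easy to code. A minimal Python sketch (the function names are mine):

```python
import math

def std_normal_cdf(x):
    # Phi(x) via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_lognormal_mean(mu, sigma, c):
    """E[e^X | e^X > c] for X ~ N(mu, sigma^2), c > 0."""
    d = math.log(c)
    num = std_normal_cdf((mu + sigma**2 - d) / sigma)  # P(X > d - sigma^2)
    den = std_normal_cdf((mu - d) / sigma)             # P(X > d)
    return math.exp(mu + sigma**2 / 2) * num / den

print(truncated_lognormal_mean(0.0, 1.0, 1.0))  # ≈ 2.774
```

Plugging this into the two integrals above then reduces the problem to one-dimensional numerical quadrature over $y$.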
|
43,120
|
Conditional Expected Value of Product of Normal and Log-Normal Distribution
|
This solution is due to the suggestions and corrections from @Hunaphu, @whuber, and others. Could someone please verify if all the steps make sense?
ANSWER STEPS START
Using some notational shortcuts,
Consider,
\begin{eqnarray*}
E\left[\left.\left(e^{X}Y+k\right)\right|\left(e^{X}Y+k\right)>0\right] & = & E\left[k\left|\left(e^{X}Y+k\right)>0\right.\right]+E\left[\left(e^{X}Y\right)\left|\left(e^{X}Y+k\right)>0\right.\right]
\end{eqnarray*}
\begin{eqnarray*}
& = & k+E\left[\left.\left(Ye^{X}\right)\right|\left.\left(Ye^{X}+k\right)>0\right]\right.
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int\int ye^{x}f\left(\left.ye^{x}\right|\left\{ ye^{x}+k\right\} >0\right)dxdy
\end{eqnarray*}
Here, $f\left(w\right)$ is the probability density function for $w$,
\begin{eqnarray*}
& = & k+\int\int ye^{x}\frac{f\left(ye^{x};\left\{ ye^{x}+k\right\} >0\right)}{f\left(\left\{ ye^{x}+k\right\} >0\right)}dxdy
\end{eqnarray*}
\begin{eqnarray*}
\left[\text{We note that, }ye^{x}>-k>0\Rightarrow y>0\right]
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int\int ye^{x}\frac{f\left(y\right)f\left(e^{x};\left\{ ye^{x}+k\right\} >0\right)}{f\left(\left\{ ye^{x}+k\right\} >0\right)}dxdy
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int y\left[\int\frac{e^{x}f\left(e^{x};\left\{ e^{x}>-\frac{k}{y}\right\} \right)}{f\left(e^{x}>-\frac{k}{y}\right)}dx\right]f\left(y\right)dy
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int y\left[\int e^{x}f\left(e^{x}\left|\left\{ e^{x}>-\frac{k}{y}\right\} \right.\right)dx\right]f\left(y\right)dy
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int_{0}^{-k}y\left[\int e^{x}f\left(e^{x}\left|\left\{ e^{x}>1\right\} \right.\right)dx\right]f\left(y\right)dy+\int_{-k}^{\infty}y\left[\int e^{x}f\left(e^{x}\left|\left\{ e^{x}<1\right\} \right.\right)dx\right]f\left(y\right)dy
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int_{0}^{\left(-k\right)}y\left[E\left(\left.W\right|W>c\right)\right]f\left(y\right)dy+\int_{\left(-k\right)}^{\infty}y\left[E\left(\left.W\right|W<c\right)\right]f\left(y\right)dy\quad;\;\text{here, }W=e^{X}\text{ and }c=1
\end{eqnarray*}
Simplifying the inner expectations,
\begin{eqnarray*}
E\left(\left.W\right|W>c\right)=\frac{1}{P\left(e^{X}>c\right)}\int_{c}^{\infty}w\frac{1}{w\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left[\frac{ln\left(w\right)-\mu_{X}}{\sigma_{X}}\right]^{2}}dw
\end{eqnarray*}
Put $t=ln\left(w\right)$, we have, $dw=e^{t}dt$
\begin{eqnarray*}
E\left(\left.W\right|W>c\right)=\frac{1}{P\left(X>ln\left(c\right)\right)}\int_{ln\left(c\right)}^{\infty}\frac{e^{t}}{\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{t-\mu_{X}}{\sigma_{X}}\right)^{2}}dt
\end{eqnarray*}
\begin{eqnarray*}
t-\frac{1}{2}\left(\frac{t-\mu_{X}}{\sigma_{X}}\right)^{2}=-\frac{1}{2\sigma_{X}^{2}}\left(t-\left(\mu_{X}+\sigma_{X}^{2}\right)\right)^{2}+\mu_{X}+\frac{\sigma_{X}^{2}}{2}
\end{eqnarray*}
\begin{eqnarray*}
E\left(\left.W\right|W>c\right)=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(\mu_{X}+\sigma_{X}Z>ln\left(c\right)\right)}\int_{ln\left(c\right)}^{\infty}\frac{1}{\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left[\frac{t-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]^{2}}dt\quad;Z\sim N\left(0,1\right)
\end{eqnarray*}
Put $s=\left[\frac{t-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]$
and $b=\left[\frac{ln\left(c\right)-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]$
we have, $ds=\frac{dt}{\sigma_{X}}$
\begin{eqnarray*}
E\left(\left.W\right|W>c\right)=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z>\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\int_{b}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}s^{2}}ds
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z<\frac{-ln\left(c\right)+\mu_{X}}{\sigma_{X}}\right)}\left[\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}s^{2}}ds-\int_{-\infty}^{b}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}s^{2}}ds\right]
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z<\frac{-ln\left(c\right)+\mu_{X}}{\sigma_{X}}\right)}\left[1-\Phi\left(b\right)\right]\quad;\Phi\text{ is the standard normal CDF}
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{\Phi\left(\frac{-ln\left(c\right)+\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(-b\right)\right]
\end{eqnarray*}
Similarly for the other case,
\begin{eqnarray*}
E\left(\left.W\right|W<c\right)=\frac{1}{P\left(e^{X}<c\right)}\int_{0}^{c}w\frac{1}{w\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left[\frac{ln\left(w\right)-\mu_{X}}{\sigma_{X}}\right]^{2}}dw
\end{eqnarray*}
Put $t=ln\left(w\right)$, we have, $dw=e^{t}dt$
\begin{eqnarray*}
E\left(\left.W\right|W<c\right)=\frac{1}{P\left(X<ln\left(c\right)\right)}\int_{-\infty}^{ln\left(c\right)}\frac{e^{t}}{\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{t-\mu_{X}}{\sigma_{X}}\right)^{2}}dt
\end{eqnarray*}
\begin{eqnarray*}
t-\frac{1}{2}\left(\frac{t-\mu_{X}}{\sigma_{X}}\right)^{2}=-\frac{1}{2\sigma_{X}^{2}}\left(t-\left(\mu_{X}+\sigma_{X}^{2}\right)\right)^{2}+\mu_{X}+\frac{\sigma_{X}^{2}}{2}
\end{eqnarray*}
\begin{eqnarray*}
E\left(\left.W\right|W<c\right)=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(\mu_{X}+\sigma_{X}Z<ln\left(c\right)\right)}\int_{-\infty}^{ln\left(c\right)}\frac{1}{\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left[\frac{t-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]^{2}}dt\quad;Z\sim N\left(0,1\right)
\end{eqnarray*}
Put $s=\left[\frac{t-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]$
and $b=\left[\frac{ln\left(c\right)-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]$
we have, $ds=\frac{dt}{\sigma_{X}}$
\begin{eqnarray*}
E\left(\left.W\right|W<c\right)=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z<\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\int_{-\infty}^{b}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}s^{2}}ds
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z<\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(b\right)\right]\quad;\Phi\text{ is the standard normal CDF}
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{\Phi\left(\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(b\right)\right]
\end{eqnarray*}
Using the results for the inner expectations,
\begin{eqnarray*}
E\left[\left.\left(e^{X}Y+k\right)\right|\left(e^{X}Y+k\right)>0\right]=k+\int_{0}^{\left(-k\right)}y\left[\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{\Phi\left(\frac{-ln\left(c\right)+\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(-b\right)\right]\right]f\left(y\right)dy+\int_{\left(-k\right)}^{\infty}y\left[\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{\Phi\left(\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(b\right)\right]\right]f\left(y\right)dy
\end{eqnarray*}
\begin{eqnarray*}
=k+e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}\left[\int_{0}^{\left(-k\right)}y\left\{ \frac{\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} f\left(y\right)dy+\int_{\left(-k\right)}^{\infty}y\left\{ \frac{\Phi\left(-\left[\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right]\right)}{\Phi\left(-\left[\frac{\mu_{X}}{\sigma_{X}}\right]\right)}\right\} f\left(y\right)dy\right]
\end{eqnarray*}
\begin{eqnarray*}
=k+e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}\left[\int_{0}^{\left(-k\right)}y\left\{ \frac{\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} f\left(y\right)dy+\int_{\left(-k\right)}^{\infty}y\left\{ \frac{1-\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{1-\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} f\left(y\right)dy\right]
\end{eqnarray*}
\begin{eqnarray*}
& = & k+e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}\left[\left\{ \frac{\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} \int_{-\frac{\mu_{Y}}{\sigma_{Y}}}^{-\left(\frac{k+\mu_{Y}}{\sigma_{Y}}\right)}\left(\mu_{Y}+\sigma_{Y}z\right)\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}z^{2}}dz\right.\\
& & +\left.\left\{ \frac{1-\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{1-\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} \int_{-\left(\frac{k+\mu_{Y}}{\sigma_{Y}}\right)}^{\infty}\left(\mu_{Y}+\sigma_{Y}z\right)\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}z^{2}}dz\right]\quad;Z\sim N\left(0,1\right)
\end{eqnarray*}
\begin{eqnarray*}
& = & k+e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}\left[\left\{ \frac{\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} \left\{ \mu_{Y}\left[\Phi\left(-\left[\frac{k+\mu_{Y}}{\sigma_{Y}}\right]\right)-\Phi\left(-\frac{\mu_{Y}}{\sigma_{Y}}\right)\right]-\frac{\sigma_{Y}}{\sqrt{2\pi}}\left[e^{-\frac{1}{2}\left(\frac{k+\mu_{Y}}{\sigma_{Y}}\right)^{2}}-e^{-\frac{1}{2}\left(\frac{\mu_{Y}}{\sigma_{Y}}\right)^{2}}\right]\right\} \right.\\
& & +\left.\left\{ \frac{1-\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{1-\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} \left\{ \mu_{Y}\left[1-\Phi\left(-\left[\frac{k+\mu_{Y}}{\sigma_{Y}}\right]\right)\right]+\frac{\sigma_{Y}}{\sqrt{2\pi}}\left[e^{-\frac{1}{2}\left(\frac{k+\mu_{Y}}{\sigma_{Y}}\right)^{2}}\right]\right\} \right]
\end{eqnarray*}
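One practical way to verify the steps is to transcribe the final display and compare its value against a Monte Carlo estimate, as suggested in the other answer. A direct Python transcription (the parameter values in the example call are arbitrary placeholders, and the function name is mine):

```python
import math

def std_normal_cdf(x):
    # Phi(x) via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_mean(mu_x, sigma_x, mu_y, sigma_y, k):
    """Evaluate the final expression above, transcribed as written."""
    pref = math.exp(mu_x + 0.5 * sigma_x**2)
    r = (mu_x + sigma_x**2) / sigma_x
    s = mu_x / sigma_x
    A = std_normal_cdf(r) / std_normal_cdf(s)
    B = (1 - std_normal_cdf(r)) / (1 - std_normal_cdf(s))
    u = (k + mu_y) / sigma_y
    v = mu_y / sigma_y
    t1 = (mu_y * (std_normal_cdf(-u) - std_normal_cdf(-v))
          - sigma_y / math.sqrt(2 * math.pi)
            * (math.exp(-u**2 / 2) - math.exp(-v**2 / 2)))
    t2 = (mu_y * (1 - std_normal_cdf(-u))
          + sigma_y / math.sqrt(2 * math.pi) * math.exp(-u**2 / 2))
    return k + pref * (A * t1 + B * t2)

print(conditional_mean(0.0, 1.0, 1.0, 2.0, -1.0))
```

If the transcribed value tracks the simulation estimate across several parameter settings, that is good evidence the derivation holds.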
|
Conditional Expected Value of Product of Normal and Log-Normal Distribution
|
This solution is due to the suggestions and corrections from @Hunaphu, @whuber, and others. Could someone please verify if all the steps make sense?
ANSWER STEPS START
Using some notational shortcuts,
|
Conditional Expected Value of Product of Normal and Log-Normal Distribution
This solution is due to the suggestions and corrections from @Hunaphu, @whuber, and others. Could someone please verify if all the steps make sense?
ANSWER STEPS START
Using some notational shortcuts,
Consider,
\begin{eqnarray*}
E\left[\left.\left(e^{X}Y+k\right)\right|\left(e^{X}Y+k\right)>0\right] & = & E\left[k\left|\left(e^{X}Y+k\right)>0\right.\right]+E\left[\left(e^{X}Y\right)\left|\left(e^{X}Y+k\right)>0\right.\right]
\end{eqnarray*}
\begin{eqnarray*}
& = & k+E\left[\left.\left(Ye^{X}\right)\right|\left.\left(Ye^{X}+k\right)>0\right]\right.
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int\int ye^{x}f\left(\left.ye^{x}\right|\left\{ ye^{x}+k\right\} >0\right)dxdy
\end{eqnarray*}
Here, $f\left(w\right)$ is the probability density function for $w$,
Here, $f\left(w\right)$ is the probability density function for $w$,
\begin{eqnarray*}
& = & k+\int\int ye^{x}\frac{f\left(ye^{x};\left\{ ye^{x}+k\right\} >0\right)}{f\left(\left\{ ye^{x}+k\right\} >0\right)}dxdy
\end{eqnarray*}
\begin{eqnarray*}
\left[\text{We note that, }ye^{x}>-k>0\Rightarrow y>0\right]
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int\int ye^{x}\frac{f\left(y\right)f\left(e^{x};\left\{ ye^{x}+k\right\} >0\right)}{f\left(\left\{ ye^{x}+k\right\} >0\right)}dxdy
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int y\left[\int\frac{e^{x}f\left(e^{x};\left\{ e^{x}>-\frac{k}{y}\right\} \right)}{f\left(e^{x}>-\frac{k}{y}\right)}dx\right]f\left(y\right)dy
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int y\left[\int e^{x}f\left(e^{x}\left|\left\{ e^{x}>-\frac{k}{y}\right\} \right.\right)dx\right]f\left(y\right)dy
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int_{0}^{\left(y<-k\right)}y\left[\int e^{x}f\left(e^{x}\left|\left\{ e^{x}>1\right\} \right.\right)dx\right]f\left(y\right)dy+\int_{\left(y>-k\right)}^{\infty}y\left[\int e^{x}f\left(e^{x}\left|\left\{ e^{x}<1\right\} \right.\right)dx\right]f\left(y\right)dy
\end{eqnarray*}
\begin{eqnarray*}
& = & k+\int_{0}^{\left(-k\right)}y\left[E\left(\left.W\right|W>c\right)\right]f\left(y\right)dy+\int_{\left(-k\right)}^{\infty}y\left[E\left(\left.W\right|W<c\right)\right]f\left(y\right)dy\quad;\;\text{here, }W=e^{X}\text{ and }c=1
\end{eqnarray*}
Simplifying the inner expectations,
\begin{eqnarray*}
E\left(\left.W\right|W>c\right)=\frac{1}{P\left(e^{X}>c\right)}\int_{c}^{\infty}w\frac{1}{w\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left[\frac{ln\left(w\right)-\mu_{X}}{\sigma_{X}}\right]^{2}}dw
\end{eqnarray*}
Put $t=ln\left(w\right)$, we have, $dw=e^{t}dt$
\begin{eqnarray*}
E\left(\left.W\right|W>c\right)=\frac{1}{P\left(X>ln\left(c\right)\right)}\int_{ln\left(c\right)}^{\infty}\frac{e^{t}}{\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{t-\mu_{X}}{\sigma_{X}}\right)^{2}}dt
\end{eqnarray*}
\begin{eqnarray*}
t-\frac{1}{2}\left(\frac{t-\mu_{X}}{\sigma_{X}}\right)^{2}=-\frac{1}{2\sigma_{X}^{2}}\left(t-\left(\mu_{X}+\sigma_{X}^{2}\right)\right)^{2}+\mu_{X}+\frac{\sigma_{X}^{2}}{2}
\end{eqnarray*}
\begin{eqnarray*}
E\left(\left.W\right|W>c\right)=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(\mu_{X}+\sigma_{X}Z>ln\left(c\right)\right)}\int_{ln\left(c\right)}^{\infty}\frac{1}{\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left[\frac{t-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]^{2}}dt\quad;Z\sim N\left(0,1\right)
\end{eqnarray*}
Put $s=\left[\frac{t-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]$
and $b=\left[\frac{ln\left(c\right)-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]$
we have, $ds=\frac{dt}{\sigma_{X}}$
\begin{eqnarray*}
E\left(\left.W\right|W>c\right)=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z>\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\int_{b}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}s^{2}}ds
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z<\frac{-ln\left(c\right)+\mu_{X}}{\sigma_{X}}\right)}\left[\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}s^{2}}ds-\int_{-\infty}^{b}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}s^{2}}ds\right]
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z<\frac{-ln\left(c\right)+\mu_{X}}{\sigma_{X}}\right)}\left[1-\Phi\left(b\right)\right]\quad;\Phi\text{ is the standard normal CDF}
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{\Phi\left(\frac{-ln\left(c\right)+\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(-b\right)\right]
\end{eqnarray*}
Similarly for the other case,
\begin{eqnarray*}
E\left(\left.W\right|W<c\right)=\frac{1}{P\left(e^{X}<c\right)}\int_{0}^{c}w\frac{1}{w\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left[\frac{ln\left(w\right)-\mu_{X}}{\sigma_{X}}\right]^{2}}dw
\end{eqnarray*}
Put $t=ln\left(w\right)$, we have, $dw=e^{t}dt$
\begin{eqnarray*}
E\left(\left.W\right|W<c\right)=\frac{1}{P\left(X<ln\left(c\right)\right)}\int_{-\infty}^{ln\left(c\right)}\frac{e^{t}}{\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{t-\mu_{X}}{\sigma_{X}}\right)^{2}}dt
\end{eqnarray*}
\begin{eqnarray*}
t-\frac{1}{2}\left(\frac{t-\mu_{X}}{\sigma_{X}}\right)^{2}=-\frac{1}{2\sigma_{X}^{2}}\left(t-\left(\mu_{X}+\sigma_{X}^{2}\right)\right)^{2}+\mu_{X}+\frac{\sigma_{X}^{2}}{2}
\end{eqnarray*}
\begin{eqnarray*}
E\left(\left.W\right|W<c\right)=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(\mu_{X}+\sigma_{X}Z<ln\left(c\right)\right)}\int_{-\infty}^{ln\left(c\right)}\frac{1}{\sigma_{X}\sqrt{2\pi}}e^{-\frac{1}{2}\left[\frac{t-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]^{2}}dt\quad;Z\sim N\left(0,1\right)
\end{eqnarray*}
Put $s=\left[\frac{t-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]$
and $b=\left[\frac{ln\left(c\right)-\left(\mu_{X}+\sigma_{X}^{2}\right)}{\sigma_{X}}\right]$
we have, $ds=\frac{dt}{\sigma_{X}}$
\begin{eqnarray*}
E\left(\left.W\right|W<c\right)=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z<\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\int_{-\infty}^{b}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}s^{2}}ds
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{P\left(Z<\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(b\right)\right]\quad;\Phi\text{ is the standard normal CDF}
\end{eqnarray*}
\begin{eqnarray*}
=\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{\Phi\left(\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(b\right)\right]
\end{eqnarray*}
Using the results for the inner expectations,
\begin{eqnarray*}
E\left[\left.\left(e^{X}Y+k\right)\right|\left(e^{X}Y+k\right)>0\right]=k+\int_{0}^{\left(-k\right)}y\left[\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{\Phi\left(\frac{-ln\left(c\right)+\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(-b\right)\right]\right]f\left(y\right)dy+\int_{\left(-k\right)}^{\infty}y\left[\frac{e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}}{\Phi\left(\frac{ln\left(c\right)-\mu_{X}}{\sigma_{X}}\right)}\left[\Phi\left(b\right)\right]\right]f\left(y\right)dy
\end{eqnarray*}
\begin{eqnarray*}
=k+e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}\left[\int_{0}^{\left(-k\right)}y\left\{ \frac{\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} f\left(y\right)dy+\int_{\left(-k\right)}^{\infty}y\left\{ \frac{\Phi\left(-\left[\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right]\right)}{\Phi\left(-\left[\frac{\mu_{X}}{\sigma_{X}}\right]\right)}\right\} f\left(y\right)dy\right]
\end{eqnarray*}
\begin{eqnarray*}
=k+e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}\left[\int_{0}^{\left(-k\right)}y\left\{ \frac{\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} f\left(y\right)dy+\int_{\left(-k\right)}^{\infty}y\left\{ \frac{1-\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{1-\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} f\left(y\right)dy\right]
\end{eqnarray*}
\begin{eqnarray*}
& = & k+e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}\left[\left\{ \frac{\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} \int_{-\frac{\mu_{Y}}{\sigma_{Y}}}^{-\left(\frac{k+\mu_{Y}}{\sigma_{Y}}\right)}\left(\mu_{Y}+\sigma_{Y}z\right)\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}z^{2}}dz\right.\\
& & +\left.\left\{ \frac{1-\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{1-\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} \int_{-\left(\frac{k+\mu_{Y}}{\sigma_{Y}}\right)}^{\infty}\left(\mu_{Y}+\sigma_{Y}z\right)\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}z^{2}}dz\right]\quad;Z\sim N\left(0,1\right)
\end{eqnarray*}
\begin{eqnarray*}
& = & k+e^{\left(\mu_{X}+\frac{1}{2}\sigma_{X}^{2}\right)}\left[\left\{ \frac{\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} \left\{ \mu_{Y}\left[\Phi\left(-\left[\frac{k+\mu_{Y}}{\sigma_{Y}}\right]\right)-\Phi\left(-\frac{\mu_{Y}}{\sigma_{Y}}\right)\right]-\frac{\sigma_{Y}}{\sqrt{2\pi}}\left[e^{-\frac{1}{2}\left(\frac{k+\mu_{Y}}{\sigma_{Y}}\right)^{2}}-e^{-\frac{1}{2}\left(\frac{\mu_{Y}}{\sigma_{Y}}\right)^{2}}\right]\right\} \right.\\
& & +\left.\left\{ \frac{1-\Phi\left(\frac{\mu_{X}+\sigma_{X}^{2}}{\sigma_{X}}\right)}{1-\Phi\left(\frac{\mu_{X}}{\sigma_{X}}\right)}\right\} \left\{ \mu_{Y}\left[1-\Phi\left(-\left[\frac{k+\mu_{Y}}{\sigma_{Y}}\right]\right)\right]+\frac{\sigma_{Y}}{\sqrt{2\pi}}\left[e^{-\frac{1}{2}\left(\frac{k+\mu_{Y}}{\sigma_{Y}}\right)^{2}}\right]\right\} \right]
\end{eqnarray*}
|
Conditional Expected Value of Product of Normal and Log-Normal Distribution
This solution is due to the suggestions and corrections from @Hunaphu, @whuber, and others. Could someone please verify if all the steps make sense?
|
43,121
|
Examples of Non-Linear Time Series?
|
Contrived financial example:
Let's say you put away amount $M$ into your savings account every month. You also put $\frac{1}{2}$ of the total money you have in the bank into a 2-month CD each month. The CD pays interest into your main account, and the interest rate for it is some step function $q(D)$, where $D$ is the amount you put in. All the remaining money, which isn't in one of the CDs, gets an interest rate of $r$. Finally, let's say you have some random cost each month, $\epsilon_t$.
At time $t$, your total savings would be:
$$
X_t = M + r\frac{1}{2}(X_{t-1}-X_{t-2}) + q\left(\frac{1}{2}X_{t-1}\right)\frac{1}{2}X_{t-1} + q\left(\frac{1}{2}X_{t-2}\right)\frac{1}{2}X_{t-2} - \epsilon_t
$$
Here we have an additive model that depends on its past in a way which is neither cleanly linear nor log-linear. It's a toy example, but in general one's rate of return on an investment grows with the amount one has saved, so assuming it's linear or log-linear doesn't strike me as safe.
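This recursion is easy to simulate. Here is a minimal sketch implementing the equation above (the step function q, the parameter values, and the noise scale are all made up for illustration):

```python
import numpy as np

def q(D):
    # hypothetical step function: larger CD deposits earn a better rate
    return 0.01 if D < 100 else 0.02

rng = np.random.default_rng(1)
M, r, T = 10.0, 0.005, 60          # monthly deposit, base rate, horizon
X = np.zeros(T)
X[0] = X[1] = M                    # arbitrary starting values
for t in range(2, T):
    eps = rng.normal(0.0, 1.0)     # random monthly cost
    X[t] = (M
            + r * 0.5 * (X[t - 1] - X[t - 2])
            + q(0.5 * X[t - 1]) * 0.5 * X[t - 1]
            + q(0.5 * X[t - 2]) * 0.5 * X[t - 2]
            - eps)
```

Because q is a step function of the lagged values, the dynamics are genuinely nonlinear in $X_{t-1}, X_{t-2}$.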
|
43,122
|
Is it possible to parallelize a matching method?
|
Nearest neighbor matching without replacement cannot be parallelized in a straightforward way. Each match depends on the matches that occurred before it (i.e., the matching must be performed sequentially). This means one could not perform the matching on independent cores that do not communicate with each other. An exception is when combining nearest neighbor matching with exact matching on one or more covariates; in that case, you can split the matching problem into separate matching problems defined by the strata of the covariates to be exactly matched. For example, you could request exact matching on province (i.e., if you had a country-wide dataset), which would split the matching problem into much smaller problems defined by province. In this case, province-specific matching can be performed on separate cores since the matches in one province do not depend on matches in another.
Nearest neighbor matching with replacement can be performed in parallel because each match does not depend on the results of other matches. Whether a control unit is already matched to another treated unit doesn't matter for each treated unit. The treated units can be partitioned and the matching for each partition can take place on its own core.
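As a toy sketch (not MatchIt's actual implementation), here is with-replacement nearest-neighbor matching on a scalar propensity score, parallelized by partitioning the treated units across workers; the data sizes and chunk count are made up:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
ps_treated = rng.random(1000)    # propensity scores of treated units
ps_control = rng.random(5000)    # propensity scores of control units

# Index the control pool once; queries against it are read-only.
nn = NearestNeighbors(n_neighbors=1).fit(ps_control.reshape(-1, 1))

def match_chunk(chunk):
    # With replacement, each treated unit's match is independent of all
    # others, so chunks can be matched concurrently without coordination.
    return nn.kneighbors(chunk.reshape(-1, 1), return_distance=False).ravel()

chunks = np.array_split(ps_treated, 4)
with ThreadPoolExecutor(max_workers=4) as ex:
    matches = np.concatenate(list(ex.map(match_chunk, chunks)))
```

The result is identical to matching all treated units in one serial pass, which is exactly why this parallelization is valid for matching with replacement and not for matching without replacement.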
MatchIt has been updated since this question was asked and now relies on Rcpp for the matching, which is much faster than the original R-based code implemented when this question was asked. With 8 million observations, though, it will still take an extremely long time, and other methods might be preferred.
|
43,123
|
Shrinkage of the eigenvalues
|
First of all, I don't know anything about the Stein–Haff estimator other than what I saw from a few seconds of Googling in https://stat.duke.edu/~berger/papers/yang.pdf , which contains the quote "This estimator has two problems. First, the intuitively compatible ordering $\phi_1 \geq \phi_2 \geq \dots \geq \phi_p$ is frequently violated. Second, and more serious, some of the $\phi_i$ may even be negative. Stein suggests an isotonizing algorithm to avoid these problems. ... The details of this isotonizing algorithm can be found in Lin and Perlman (1985)".
That reference is: LIN, S. P. and PERLMAN,M. D. (1985). A Monte Carlo comparison of four estimators for a covariance matrix. In Multivariate Analysis 6 (P. R. Krishnaiah, ed.) 411-429. North-Holland, Amsterdam.
However, I do know about optimization. Isotonizing constraints can be placed on a least squares problem, making it into a (linearly constrained) convex Quadratic Programming (QP) problem, which is easy to formulate and numerically solve using off the shelf software. If an L^1 norm is used for the regression, or even an L^1 penalty being added to an L^2 objective, that is still a convex QP. In the case in which the objective is solely L^1, it would actually be a Linear Programming (LP) problem, which is a special case of a convex QP.
As for the negative eigenvalues, presuming those are still possible after adding the isotonizing constraints, that can be dealt with by imposing a semidefinite constraint on the covariance matrix. I.e., imposing the constraint that the minimum eigenvalue $\geq 0$. You could actually set a minimum eigenvalue other than 0 if you so desire, and you would need to do this if you want to ensure the covariance matrix is nonsingular, as you seem to suggest is desired or required. Addition of this semidefinite constraint turns the whole optimization problem into a convex semidefinite program (SDP), or, technically, something convertible into one.
Formulating and numerically solving such a convex SDP, i.e., objective in your choice of norm (L^p for any $p \geq 1$), plus any objective penalty in your choice of norm $(p \geq 1)$ which need not be the same as the other norm, plus isotonizing (linear) constraints, plus semi-definite constraint, is VERY easy and straightforward using a tool such as CVX http://cvxr.com/cvx/ . This should be very fast executing unless dimension of the covariance matrix (what you called p, not what I called p) is in the thousands or greater. YALMIP http://users.isy.liu.se/johanl/yalmip/ could be used instead of CVX (which only allows formulation and solution of convex optimization problems, except for the optional specification of integer constraints). YALMIP allows for a greater choice of optimization solvers and a greater range of problems (non-convex) which can be formulated and solved than CVX, but has a steeper learning curve.
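For the isotonizing least-squares piece alone (i.e., without the semidefinite constraint, which would need CVX/YALMIP or cvxpy), here is a minimal Python sketch: project raw eigenvalue estimates onto the ordered, nonnegative cone using the pool-adjacent-violators algorithm. The raw eigenvalues are made-up numbers:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Raw eigenvalue estimates: ordering is violated and one entry is negative.
raw = np.array([5.2, 1.1, 2.4, 0.3, -0.7])

# Least-squares fit subject to phi_1 >= phi_2 >= ... >= phi_p (PAVA).
iso = IsotonicRegression(increasing=False)
phi = iso.fit_transform(np.arange(len(raw)), raw)

# Enforce nonnegativity (use a positive floor to guarantee nonsingularity).
phi = np.clip(phi, 0.0, None)
```

This is the unconstrained-matrix special case of the QP described above; the full problem with a matrix semidefinite constraint is where the SDP machinery comes in.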
|
43,124
|
Any significance of area under curves in lasso plot?
|
A couple things that immediately occur to me about this.
I think spdrnl's right, due to the standardization, the effect sizes should be comparable. It looks like it may be the case that the plot is on the scale of the original variables though, I'd check which is true and work with a plot of the coefficients of the standardized predictors.
First observation. I think you'll want to be careful with your region of integration. Suppose the most predictive model is associated with a $\log(\lambda)$ somewhere in the middle of the plot. Then the models corresponding to the left hand side of the plot are overfit, and just capturing noise in the data. You probably don't want to report on this area. So, in terms of lambda, I would recommend integrating:
$$ \int_0^{\lambda_{opt}} \left| \beta_i(t) \right| \, dt $$
Second observation. You are going to lose some subtlety with non-monotonic coefficient paths. I'm thinking of your lasso example from yesterday.
Here the area method would report some definite significance for cyl. What's really true is that cyl is important for small models, then the effect drops out for large models. The area approach does not capture this. You may want to complement your area measurements with comments or pictures focusing on these interesting cases.
Finally, you'll have to choose what to measure on your x-axis. The choices are $\lambda$, $\log(\lambda)$ and $\sum_i | \beta_i |$. I would lean towards the latter, as that is measuring how much of the total allocated coefficient budget goes to each predictor. The others are only interpretable through Lagrange multipliers, making it hard to really be sure what is being measured.
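A rough sketch of the area computation in Python (synthetic data; for simplicity this integrates the whole path against the coefficient-budget axis recommended above, whereas in practice you would restrict the range using a cross-validated $\lambda_{opt}$):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

X, y = make_regression(n_samples=100, n_features=5, noise=1.0, random_state=0)

# coefs has shape (n_features, n_alphas): one coefficient path per predictor.
alphas, coefs, _ = lasso_path(X, y)

budget = np.abs(coefs).sum(axis=0)      # x-axis: total |beta| at each alpha
order = np.argsort(budget)              # integrate along increasing budget

# Trapezoidal area under |beta_i| for each predictor.
areas = np.trapz(np.abs(coefs)[:, order], budget[order], axis=1)
```

Predictors with larger `areas` claim more of the coefficient budget across the path, which is the ranking this approach would report.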
|
43,125
|
Clustering from similarity/distance matrix [duplicate]
|
If you have a similarity matrix, try to use Spectral methods for clustering. Take a look at Laplacian Eigenmaps for example. The idea is to compute eigenvectors from the Laplacian matrix (computed from the similarity matrix) and then come up with the feature vectors (one for each element) that respect the similarities. You can then cluster these feature vectors using for example k-means clustering algorithm.
From a practical perspective, if your matrix is big and dense, spectral methods can quickly become very computationally intensive and turn into memory hogs.
I used Spectral methods for image clustering and eventually classification. The results were pretty good. The difficult part was to get a good similarity matrix.
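For reference, a minimal sketch of this pipeline with scikit-learn, whose SpectralClustering accepts a precomputed affinity (similarity) matrix directly; the toy data and the RBF similarity are made up here:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

# Two well-separated 2-D blobs of 20 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(3.0, 0.3, (20, 2))])

S = rbf_kernel(X)  # dense similarity matrix, one row/column per element

# Spectral clustering from the similarity matrix: Laplacian eigenvectors
# give the feature vectors, which are then clustered with k-means.
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(S)
```

As the answer notes, the hard part in practice is constructing a good similarity matrix; the clustering step itself is a few lines.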
|
43,126
|
Clustering from similarity/distance matrix [duplicate]
|
Two ideas immediately come to mind:
The simpler is hierarchical clustering http://en.wikipedia.org/wiki/Hierarchical_clustering which only requires distances between points.
The other is much more complicated. There are techniques which, given distances between points, provide a distance-preserving embedding into a Euclidean space. Once there, lots of clustering techniques will apply. When I remember (or successfully google) the names of several of these techniques, I'll edit the answer.
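A minimal sketch of the first idea with SciPy, which runs hierarchical clustering straight from a pairwise distance matrix (the distances here are made up; `squareform` converts the square matrix to the condensed form `linkage` expects):

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Symmetric distance matrix: points {0,1} are close, {2,3} are close.
D = np.array([[0.0, 0.2, 5.0, 5.1],
              [0.2, 0.0, 5.2, 5.0],
              [5.0, 5.2, 0.0, 0.3],
              [5.1, 5.0, 0.3, 0.0]])

Z = linkage(squareform(D), method="average")     # build the dendrogram
labels = fcluster(Z, t=2, criterion="maxclust")  # cut it into 2 clusters
```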
|
43,127
|
The correct use of tensor product in gam (mgcv) function
|
Before answering both of your questions, it's important to address the purpose of using ti() (and te() for that matter). Both smoothing functions are used when you're including interaction terms with different scales or units in your model.
On that note, whenever you're trying to examine the effects of an explanatory variable (x1) on y across different values of another explanatory variable (x2), you need to make sure that they're on the same scale.
Is it correct using ti() (i.e. tensor product) when there is no interaction in the formula?
It's not appropriate to use the ti() function when there is no interaction in the formula. The purpose of ti() is to separate the interactions from individual univariate effects. In order to accomplish this, you use regular smoothing terms (s()) for each variable, and then ti() for each interaction.
It would look something like this: mod_ti <- gam(y ~ s(x1) + s(x2) + ti(x1, x2)) Where the effects of x1 and x2 are separately smoothed, and the interaction between both on different scales is a separate term contained in ti().
Can I compare directly (e.g. using AIC for example) models fitted with ti() and models fitted with s()?
Sure you can compare them, but again, if your interaction terms are on different scales, you shouldn't fit the interaction with s(). You should either use te() or ti() (the choice depends on whether or not you're including the separate effects of the interaction terms in your model).
|
43,128
|
GradientBoostClassifier(sklearn) takes very long time to train [closed]
|
Might be a bit late... But.
1 - sklearn's Random Forest supports multithreading. GradientBoostingClassifier does not. This can be responsible for an 8-fold speed-up.
2 - sklearn's Random Forest works on a subset of the total number of features (at least, by default), whereas GradientBoostingClassifier uses all the features to grow each tree.
If you set the argument max_features for GBC, you can observe a huge speed-up (but different results). From sklearn documentation :
max_features : int, float, string or None, optional (default=None)
The number of features to consider when looking for the best split:
If int, then consider max_features features at each split.
If float, then max_features is a percentage and int(max_features *
n_features) features are considered at each split.
If “auto”, then max_features=sqrt(n_features).
If “sqrt”, then max_features=sqrt(n_features).
If “log2”, then max_features=log2(n_features).
If None, then max_features=n_features.
Choosing max_features < n_features leads to a reduction of variance and
an increase in bias.
Option 2 is a matter of choice/performance. As for option 1, an implementation of GBC supporting multithreading is now available: xgboost, https://github.com/dmlc/xgboost. I used it with R, but the python implementation seems even easier to use.
Edit. Regarding the training time of various algorithms, you may be interested in learning more about the complexities of machine learning methods.
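A minimal sketch of the point-2 speed-up knob (toy data; the dataset sizes are made up, and actual timings will vary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=50, random_state=0)

# max_features="sqrt" considers only sqrt(n_features) candidates per split,
# making each tree cheaper to grow than the default (all features).
gbc = GradientBoostingClassifier(n_estimators=50, max_features="sqrt",
                                 random_state=0).fit(X, y)
```

As the docs warn, restricting max_features reduces variance but increases bias, so check that predictive performance holds up before banking the speed-up.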
|
43,129
|
Nested linear mixed-effects model
|
I think the models you wrote are not incorrect, although I do wonder why you chose to treat Sites as fixed effects rather than random effects. There is nothing special about these sites, right? For example, you don't care about any differences among these particular 9 sites? If not, they are probably best considered random. 9 is not a lot of levels for a random factor, but hey, I'm sure it's expensive to get a lot of different sites.
Since FAM, GEN, and SPEC are explicitly nested in your dataset (e.g., each FAM has its own unique set of GEN labels associated with it), another way to write your models would be:
TWL1 <- lmer(TWL ~ SITE + (1|FAM)+(1|GEN)+(1|SPEC))
CPI1 <- lmer(CPI ~ SITE + (1|FAM)+(1|GEN)+(1|SPEC))
ACL1 <- lmer(ACL ~ SITE + (1|FAM)+(1|GEN)+(1|SPEC))
Although, as I hinted above, the models where all effects are random might make more sense:
TWL1 <- lmer(TWL ~ (1|SITE)+(1|FAM)+(1|GEN)+(1|SPEC))
CPI1 <- lmer(CPI ~ (1|SITE)+(1|FAM)+(1|GEN)+(1|SPEC))
ACL1 <- lmer(ACL ~ (1|SITE)+(1|FAM)+(1|GEN)+(1|SPEC))
Or even possibly the models where sites are random and the plant categories are fixed? Not sure about these but they seem at least not-obviously-crazy to me:
TWL1 <- lmer(TWL ~ FAM + GEN + SPEC + (1|SITE))
CPI1 <- lmer(CPI ~ FAM + GEN + SPEC + (1|SITE))
ACL1 <- lmer(ACL ~ FAM + GEN + SPEC + (1|SITE))
I'm not totally sure what "70% of the basal area" means, but if it implies that future replications of the study would most likely end up with the same set of plant categories (although obviously different individual plants), then maybe this last specification is defensible. But I leave that to your scientific judgment.
As for whether you want to compare models using likelihood ratio tests, it really just depends on what you are wanting to know. If your goal is to talk about proportions of variation due to each of the effects in your study, the models with all random effects would probably be easiest because you can compute those proportions simply by taking the ratio of each variance component over the sum of all the variance components.
|
43,130
|
What are the disadvantages of using Lasso for feature selection in classification problems? [duplicate]
|
Lasso doesn't just do feature selection. It's trying to minimise the sum of squared errors subject to a penalty on the magnitude of the regression coefficients. This will often lead to a lower mean square error compared to an OLS procedure.
The nature of the $l_1$ penalty pushes many regression coefficients to zero, inducing sparsity and thus constituting a form of feature selection.
However, if we are picking the best subset of all available predictors by minimum sum of squared errors - the best choice would be all of them.
If we set an arbitrary $k=10$, how do we know we should not choose $k=11$? Lasso doesn't have this problem as $k$ is chosen on a principled basis according to the optimisation problem.
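To make the sparsity concrete, here is a toy coordinate-descent lasso in Python (data and penalty value invented for illustration; in practice you would use a library such as glmnet or scikit-learn):

```python
def soft_threshold(rho, lam):
    # The soft-thresholding operator induced by the l1 penalty:
    # coefficients whose "signal" rho falls below lam become exactly zero.
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    # Plain coordinate descent on the penalised least-squares objective.
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            rho, z = 0.0, 0.0
            for i in range(n):
                # Prediction using all coefficients except beta[j].
                partial = sum(X[i][k] * beta[k] for k in range(p) if k != j)
                rho += X[i][j] * (y[i] - partial)
                z += X[i][j] ** 2
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Toy data: y depends only on the first predictor (y = 2 * x0);
# the second column is pure noise.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.15], [4.0, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
beta = lasso_cd(X, y, lam=0.5)
# beta[1] is driven exactly to zero: the noise predictor is dropped,
# while beta[0] is shrunk slightly below the OLS value of 2.
```

The exact zero (rather than a merely small coefficient) is what makes the $l_1$ penalty a feature-selection device.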
|
What are the disadvantages of using Lasso for feature selection in classification problems? [duplica
|
Lasso doesn't just do feature selection. It's trying to minimise the sum of squared errors subject to a penalty on the magnitude of the regression coefficients. This will often lead to a lower mean squar
|
What are the disadvantages of using Lasso for feature selection in classification problems? [duplicate]
Lasso doesn't just do feature selection. It's trying to minimise the sum of squared errors subject to a penalty on the magnitude of the regression coefficients. This will often lead to a lower mean square error compared to an OLS procedure.
The nature of the $l_1$ penalty pushes many regression coefficients to zero, inducing sparsity and thus constituting a form of feature selection.
However, if we are picking the best subset of all available predictors by minimum sum of squared errors - the best choice would be all of them.
If we set an arbitrary $k=10$, how do we know we should not choose $k=11$? Lasso doesn't have this problem as $k$ is chosen on a principled basis according to the optimisation problem.
|
What are the disadvantages of using Lasso for feature selection in classification problems? [duplica
Lasso doesn't just do feature selection. It's trying to minimise the sum of squared errors subject to a penalty on the magnitude of the regression coefficients. This will often lead to a lower mean squar
|
43,131
|
How is ABC more computationally efficient than exact Bayesian Computation for parameter estimation in dynamical systems (ODE) models?
|
It looks like it is similar to HMC in that it uses a type of stochastic gradient instead of a random walk. I didn't see anywhere that they say they could do the estimate based on a single simulation. Instead, it looks like you can simply get good values much faster. It really looks very similar to HMC as it's implemented in Stan. I would actually recommend Stan, since it looks like it has better documentation and there are some books on it currently.
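For contrast with the gradient-based scheme discussed here, the basic ABC idea can be sketched as plain rejection sampling (a toy Python example with invented data; this is not the method from the paper, just the simplest ABC variant):

```python
import random

random.seed(0)

# "Observed" data: draws from N(true_theta, 1). In ABC we pretend we can
# only simulate from the model, never evaluate its likelihood.
true_theta = 1.5
observed = [random.gauss(true_theta, 1.0) for _ in range(50)]
obs_mean = sum(observed) / len(observed)

accepted = []
for _ in range(5000):
    theta = random.uniform(-5.0, 5.0)                 # draw from the prior
    sim = [random.gauss(theta, 1.0) for _ in range(50)]  # simulate data
    # Keep theta if a summary statistic of the simulation is close enough
    # to the observed one (the tolerance controls the approximation).
    if abs(sum(sim) / len(sim) - obs_mean) < 0.1:
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
```

Each posterior draw costs a full forward simulation, which is exactly why more efficient schemes (HMC-like gradient methods, Stan's NUTS) are attractive for ODE models where simulation is expensive.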
|
How is ABC more computationally efficient than exact Bayesian Computation for parameter estimation i
|
It looks like it is similar to HMC in that it uses a type of stochastic gradient instead of a random walk. I didn't see anywhere that they say they could do the estimate based on a single simulation.
|
How is ABC more computationally efficient than exact Bayesian Computation for parameter estimation in dynamical systems (ODE) models?
It looks like it is similar to HMC in that it uses a type of stochastic gradient instead of a random walk. I didn't see anywhere that they say they could do the estimate based on a single simulation. Instead, it looks like you can simply get good values much faster. It really looks very similar to HMC as it's implemented in Stan. I would actually recommend Stan, since it looks like it has better documentation and there are some books on it currently.
|
How is ABC more computationally efficient than exact Bayesian Computation for parameter estimation i
It looks like it is similar to HMC in that it uses a type of stochastic gradient instead of a random walk. I didn't see anywhere that they say they could do the estimate based on a single simulation.
|
43,132
|
Can a labeled LDA (Latent Dirichlet Allocation) dataset have just one label per document?
|
There's nothing stopping you, but this essentially reduces to learning a bag of words model for each label, albeit with a shared prior in the form of $\eta$. The new model would look like this:
To see why these are equivalent, see this snippet from the labelled LDA paper:
The traditional LDA model then draws a multinomial mixture distribution $\theta^{(d)}$ over all $K$ topics, for each document $d$, from a Dirichlet prior $\alpha$. However, we would like to restrict $\theta^{(d)}$ to be defined only over the topics that correspond to its labels $\Lambda(d)$. Since the word-topic assignments $z_i$ (see step 9 in Table 1) are drawn from this distribution, this restriction ensures that all the topic assignments are limited to the document’s labels.
If the document has only a single—and importantly, observed—label, its topic assignment is limited to the corresponding topic, and all its words are generated from the same multinomial distribution. (This is because $\Lambda(d)$ will ensure that only one value of $\theta^{(d)}$ is nonzero.)
It bears superficial resemblance to a mixture of unigrams, where each document is produced by a single topic. But in that model the topic is a latent variable, and in your case it's observed. Cf. the mixture of unigrams model as described in the original LDA paper:
Under this mixture model, each document is generated by first choosing a topic $z$ and then generating $N$ words independently from the conditional
multinomial $p(w|z)$. [...] When estimated from a corpus, the word distributions can be viewed as representations of topics under the assumption that each document exhibits exactly one topic.
There's nothing wrong with bag of words, but it's worth noting that the paper introducing LDA demonstrated better performance, in terms of perplexity, in two experiments. (Figure 9.)
To the question about classification, sure: Each bag of words model will give you a likelihood for the document, and you can use this along with a prior on topics to find $p(z|\textbf{w})$ using Bayes rule. (If your prior on topics is uniform, this is equivalent to maximum likelihood.)
You've only asked whether this is possible, and not about likely performance. But for what it's worth, my intuition is that you'll get better predictive performance with regular LDA and a subsequent classifier. When in doubt, cross validate.
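That classification recipe (one bag-of-words model per label, combined with a prior via Bayes rule) can be sketched in a few lines of Python (toy corpus and labels invented for illustration; add-one smoothing stands in for the shared Dirichlet prior $\eta$):

```python
import math
from collections import Counter

# Tiny labelled corpus, invented for illustration.
docs = [
    ("sports", "ball goal team win"),
    ("sports", "team match goal"),
    ("politics", "vote law senate"),
    ("politics", "senate vote debate"),
]

# One bag-of-words (word-count) model per label.
counts = {}
for label, text in docs:
    counts.setdefault(label, Counter()).update(text.split())
vocab = {w for c in counts.values() for w in c}

def log_posterior(label, text):
    # log p(z) + log p(w|z), with a uniform prior over labels
    # (so this reduces to maximum likelihood, as noted above).
    c = counts[label]
    total = sum(c.values())
    score = math.log(1.0 / len(counts))
    for w in text.split():
        score += math.log((c[w] + 1) / (total + len(vocab)))
    return score

def classify(text):
    return max(counts, key=lambda lab: log_posterior(lab, text))
```

With this setup `classify("goal team")` picks the sports model and `classify("vote senate")` the politics model.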
|
Can a labeled LDA (Latent Dirichlet Allocation) dataset have just one label per document?
|
There's nothing stopping you, but this essentially reduces to learning a bag of words model for each label, albeit with a shared prior in the form of $\eta$. The new model would look like this:
To se
|
Can a labeled LDA (Latent Dirichlet Allocation) dataset have just one label per document?
There's nothing stopping you, but this essentially reduces to learning a bag of words model for each label, albeit with a shared prior in the form of $\eta$. The new model would look like this:
To see why these are equivalent, see this snippet from the labelled LDA paper:
The traditional LDA model then draws a multinomial mixture distribution $\theta^{(d)}$ over all $K$ topics, for each document $d$, from a Dirichlet prior $\alpha$. However, we would like to restrict $\theta^{(d)}$ to be defined only over the topics that correspond to its labels $\Lambda(d)$. Since the word-topic assignments $z_i$ (see step 9 in Table 1) are drawn from this distribution, this restriction ensures that all the topic assignments are limited to the document’s labels.
If the document has only a single—and importantly, observed—label, its topic assignment is limited to the corresponding topic, and all its words are generated from the same multinomial distribution. (This is because $\Lambda(d)$ will ensure that only one value of $\theta^{(d)}$ is nonzero.)
It bears superficial resemblance to a mixture of unigrams, where each document is produced by a single topic. But in that model the topic is a latent variable, and in your case it's observed. Cf. the mixture of unigrams model as described in the original LDA paper:
Under this mixture model, each document is generated by first choosing a topic $z$ and then generating $N$ words independently from the conditional
multinomial $p(w|z)$. [...] When estimated from a corpus, the word distributions can be viewed as representations of topics under the assumption that each document exhibits exactly one topic.
There's nothing wrong with bag of words, but it's worth noting that the paper introducing LDA demonstrated better performance, in terms of perplexity, in two experiments. (Figure 9.)
To the question about classification, sure: Each bag of words model will give you a likelihood for the document, and you can use this along with a prior on topics to find $p(z|\textbf{w})$ using Bayes rule. (If your prior on topics is uniform, this is equivalent to maximum likelihood.)
You've only asked whether this is possible, and not about likely performance. But for what it's worth, my intuition is that you'll get better predictive performance with regular LDA and a subsequent classifier. When in doubt, cross validate.
|
Can a labeled LDA (Latent Dirichlet Allocation) dataset have just one label per document?
There's nothing stopping you, but this essentially reduces to learning a bag of words model for each label, albeit with a shared prior in the form of $\eta$. The new model would look like this:
To se
|
43,133
|
Can a labeled LDA (Latent Dirichlet Allocation) dataset have just one label per document?
|
In supervised LDA a single label is added for each document (in addition to topic labels for each word). This label, known as the response variable, reflects some quantity of interest associated with a document: this could be the quality score of a report, or a star rating of a movie review, or the number of downloads of an on-line article. The graphical model for sLDA is shown below:
The documents and response variables are modeled jointly in order to find latent topics that will best predict the response variables for future unlabeled documents. For more information about sLDA, check out the original paper.
|
Can a labeled LDA (Latent Dirichlet Allocation) dataset have just one label per document?
|
In supervised LDA a single label is added for each document (in addition to topic labels for each word). This label known as response variable reflects some quantity of interest associated with a docu
|
Can a labeled LDA (Latent Dirichlet Allocation) dataset have just one label per document?
In supervised LDA a single label is added for each document (in addition to topic labels for each word). This label, known as the response variable, reflects some quantity of interest associated with a document: this could be the quality score of a report, or a star rating of a movie review, or the number of downloads of an on-line article. The graphical model for sLDA is shown below:
The documents and response variables are modeled jointly in order to find latent topics that will best predict the response variables for future unlabeled documents. For more information about sLDA, check out the original paper.
|
Can a labeled LDA (Latent Dirichlet Allocation) dataset have just one label per document?
In supervised LDA a single label is added for each document (in addition to topic labels for each word). This label known as response variable reflects some quantity of interest associated with a docu
|
43,134
|
DNA exoneration: what are the chances?
|
Are there any studies that have investigated this FRR for DNA?
Yes, what you are referring to is also called a type II error or a false negative. People have investigated this, also for lab work in general:
Koehler et al.
Kloosterman et al.
Lapworth & Teal
NFI
A very broad range of error values, with Koehler reporting a ridiculous 12 out of 1000.
What are the statistics on this?
Even if the test were perfect, it would still have an error rate due to the presence of monozygotic (identical) twins in the population and the small chance of two people having identical DNA profiles, called a coincidental match ($\approx 1\cdot 10^{-9}\%$). There are approximately 0.3% monozygotic twins in the population, which gives an error rate of 0.15% in the case of a perfect test.
but...
Since the test is not completely perfect, and humans aren't either, a larger error is introduced. Like every other test there are type I errors, which might be due to equipment, a heterogeneous DNA mixture, human error or some unknown external source.
Whenever you design a test you also test the test on its error rate by testing some samples of which you know the outcome:
$$\text{False negative rate} = \frac{\text{False negatives}}{\text{False negatives} + \text{True positives}}$$
This would yield the value of the error rate, which is different for different laboratories (due to different people, equipment, etc.). If you are into Bayes' theorem and would like to know more about error statistics in forensic DNA tests, see Thompson et al.
It is unacceptable that a criminal walks because the DNA test falsely rejected the match.
Indeed, but a good judge would not make a decision solely on a DNA test and bears in mind there are errors in testing. However when a DNA test is presented, the reported error rate is the chance of getting a coincidental match!
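The rate formula is just a ratio of counts; a quick Python sanity check with invented numbers (the twin figure mirrors the argument above):

```python
# Invented validation counts, loosely echoing a 12-in-1000 figure.
false_negatives = 12
true_positives = 988

# False negative rate = FN / (FN + TP).
fnr = false_negatives / (false_negatives + true_positives)

# Floor on the error rate of even a "perfect" test, from the twin argument:
twin_rate = 0.003                    # ~0.3% monozygotic twins in the population
perfect_test_error = twin_rate / 2   # the 0.15% quoted above
```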
|
DNA exoneration: what are the chances?
|
Are there any studies that have investigated this FRR for DNA?
Yes, what you are referring to is also called a type II error or a false negative. People have investigated this and also for lab work in
|
DNA exoneration: what are the chances?
Are there any studies that have investigated this FRR for DNA?
Yes, what you are referring to is also called a type II error or a false negative. People have investigated this, also for lab work in general:
Koehler et al.
Kloosterman et al.
Lapworth & Teal
NFI
A very broad range of error values, with Koehler reporting a ridiculous 12 out of 1000.
What are the statistics on this?
Even if the test were perfect, it would still have an error rate due to the presence of monozygotic (identical) twins in the population and the small chance of two people having identical DNA profiles, called a coincidental match ($\approx 1\cdot 10^{-9}\%$). There are approximately 0.3% monozygotic twins in the population, which gives an error rate of 0.15% in the case of a perfect test.
but...
Since the test is not completely perfect, and humans aren't either, a larger error is introduced. Like every other test there are type I errors, which might be due to equipment, a heterogeneous DNA mixture, human error or some unknown external source.
Whenever you design a test you also test the test on its error rate by testing some samples of which you know the outcome:
$$\text{False negative rate} = \frac{\text{False negatives}}{\text{False negatives} + \text{True positives}}$$
This would yield the value of the error rate, which is different for different laboratories (due to different people, equipment, etc.). If you are into Bayes' theorem and would like to know more about error statistics in forensic DNA tests, see Thompson et al.
It is unacceptable that a criminal walks because the DNA test falsely rejected the match.
Indeed, but a good judge would not make a decision solely on a DNA test and bears in mind there are errors in testing. However when a DNA test is presented, the reported error rate is the chance of getting a coincidental match!
|
DNA exoneration: what are the chances?
Are there any studies that have investigated this FRR for DNA?
Yes, what you are referring to is also called a type II error or a false negative. People have investigated this and also for lab work in
|
43,135
|
How can I derive confidence intervals from the confusion matrix for a classifier?
|
The question makes good sense. It is specifically noted that the contingency table is a result of cross-validation. Witten et al's Data Mining book (based around Weka) discusses a modified T-Test for (Repeated) Cross-Validation. A T-Test implicitly defines a confidence interval. Given we have a CV and each cell is an averaged statistic, CIs do exist per cell, although they will be most commonly calculated for the marginal statistics, and directly or via those for whole of table statistics.
In the following paper I explore the adaptation of various generalizations of confidence intervals, applied to correlation, to useful multiclass cases, and validate with Monte Carlo simulation. It is difficult to give a clear recommendation, as the same measure can be overly conservative in some cases and insufficiently conservative in others; nonetheless a reasonable choice is suggested and illustrated in simulations across a range of parameterizations:
Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation
DMW Powers
International Journal of Machine Learning Technology 2 (1), 37-63
It is possible to calculate recall and precision from a contingency table (divide a diagonal entry by the appropriate marginal sum) and their inverses (by the complement in the diagonal vs the margins - or simply to convert to binary tables) and define confidence intervals based on the Wald or Wilson techniques. A useful rule of thumb is introduced by Agresti et al for the normal distribution assumption at alpha=0.05, which is to add 2 positive and 2 negative examples. Tony Cai shows this is appropriate for the Binomial distribution and gives modified versions for the Negative Binomial (not applicable here) and the Poisson (arguably applicable, and used as an assumption in some of my derivations above).
The Poisson modification is probably most applicable here (think in terms of when another PpRr Predicted/Real pair might arrive) as it is focussed only on the class of interest and doesn't distribute errors/negativity amongst the other classes. It adds two more arrivals to a cell before calculating the statistics relating to that cell.
Wei Pan (2001) derives some other possible measures based around the binomial distribution and the T-test.
The Cai paper is here:
http://www-stat.wharton.upenn.edu/~tcai/paper/Plugin-Exp-CI.pdf
My paper is here:
http://dspace2.flinders.edu.au/xmlui/bitstream/handle/2328/27165/Powers%20Evaluation.pdf
This is something I'm still exploring - hence returning periodically to see if there's any new/useful contributions on the topic...
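Of the two techniques mentioned, the Wilson interval is straightforward to apply to a single cell-over-margin proportion, such as per-class recall; a Python sketch with invented counts:

```python
import math

def wilson_interval(successes, n, z=1.96):
    # 95% Wilson score interval for a binomial proportion.
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)

# Recall for one class: the diagonal cell divided by its row margin
# (counts invented for illustration).
diagonal, row_total = 45, 60
low, high = wilson_interval(diagonal, row_total)
# The point estimate 0.75 sits inside (low, high), roughly (0.63, 0.84).
```

The Agresti-style "add 2 positive and 2 negative examples" rule of thumb mentioned above is an approximation to this interval.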
|
How can I derive confidence intervals from the confusion matrix for a classifier?
|
The question makes good sense. It is specifically noted that the contingency table is a result of cross-validation. Witten et al's Data Mining book (based around Weka) discusses a modified T-Test for
|
How can I derive confidence intervals from the confusion matrix for a classifier?
The question makes good sense. It is specifically noted that the contingency table is a result of cross-validation. Witten et al's Data Mining book (based around Weka) discusses a modified T-Test for (Repeated) Cross-Validation. A T-Test implicitly defines a confidence interval. Given we have a CV and each cell is an averaged statistic, CIs do exist per cell, although they will be most commonly calculated for the marginal statistics, and directly or via those for whole of table statistics.
In the following paper I explore the adaptation of various generalizations of confidence intervals, applied to correlation, to useful multiclass cases, and validate with Monte Carlo simulation. It is difficult to give a clear recommendation, as the same measure can be overly conservative in some cases and insufficiently conservative in others; nonetheless a reasonable choice is suggested and illustrated in simulations across a range of parameterizations:
Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation
DMW Powers
International Journal of Machine Learning Technology 2 (1), 37-63
It is possible to calculate recall and precision from a contingency table (divide a diagonal entry by the appropriate marginal sum) and their inverses (by the complement in the diagonal vs the margins - or simply to convert to binary tables) and define confidence intervals based on the Wald or Wilson techniques. A useful rule of thumb is introduced by Agresti et al for the normal distribution assumption at alpha=0.05, which is to add 2 positive and 2 negative examples. Tony Cai shows this is appropriate for the Binomial distribution and gives modified versions for the Negative Binomial (not applicable here) and the Poisson (arguably applicable, and used as an assumption in some of my derivations above).
The Poisson modification is probably most applicable here (think in terms of when another PpRr Predicted/Real pair might arrive) as it is focussed only on the class of interest and doesn't distribute errors/negativity amongst the other classes. It adds two more arrivals to a cell before calculating the statistics relating to that cell.
Wei Pan (2001) derives some other possible measures based around the binomial distribution and the T-test.
The Cai paper is here:
http://www-stat.wharton.upenn.edu/~tcai/paper/Plugin-Exp-CI.pdf
My paper is here:
http://dspace2.flinders.edu.au/xmlui/bitstream/handle/2328/27165/Powers%20Evaluation.pdf
This is something I'm still exploring - hence returning periodically to see if there's any new/useful contributions on the topic...
|
How can I derive confidence intervals from the confusion matrix for a classifier?
The question makes good sense. It is specifically noted that the contingency table is a result of cross-validation. Witten et al's Data Mining book (based around Weka) discusses a modified T-Test for
|
43,136
|
How can I derive confidence intervals from the confusion matrix for a classifier?
|
I don't see the value in confidence intervals on (elements of) a contingency table. I suggest to consider ROC curves instead, because the confidence depends per prediction, not per class. That is assuming you have a model that is more informative than simply positive/negative.
Consider logistic regression at the standard threshold of 50% probability to decide an instance is positive. In terms of a contingency table, probabilities of 51% and 99% are treated the same even though the model's output clearly shows that they are not. A confidence interval on precision (for instance) would abstract all this information away.
|
How can I derive confidence intervals from the confusion matrix for a classifier?
|
I don't see the value in confidence intervals on (elements of) a contingency table. I suggest to consider ROC curves instead, because the confidence depends per prediction, not per class. That is assu
|
How can I derive confidence intervals from the confusion matrix for a classifier?
I don't see the value in confidence intervals on (elements of) a contingency table. I suggest to consider ROC curves instead, because the confidence depends per prediction, not per class. That is assuming you have a model that is more informative than simply positive/negative.
Consider logistic regression at the standard threshold of 50% probability to decide an instance is positive. In terms of a contingency table, probabilities of 51% and 99% are treated the same even though the model's output clearly shows that they are not. A confidence interval on precision (for instance) would abstract all this information away.
|
How can I derive confidence intervals from the confusion matrix for a classifier?
I don't see the value in confidence intervals on (elements of) a contingency table. I suggest to consider ROC curves instead, because the confidence depends per prediction, not per class. That is assu
|
43,137
|
How can I derive confidence intervals from the confusion matrix for a classifier?
|
I agree with the others. If you want it to make sense you'd have to give the "interval" for a particular individual. You could give for example a list of the most probable classes as a "CI" instead of just the most likely one for that observation. If you don't have the information per observation though, what's the point?
EDIT: With "the most likely classes" I mean for example those whose probabilities sum up to 95%.
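A sketch of that idea in Python (class probabilities invented for illustration):

```python
# Predicted class probabilities for a single observation.
probs = {"a": 0.55, "b": 0.30, "c": 0.10, "d": 0.05}

def credible_set(probs, mass=0.95):
    # Take classes in decreasing probability until their total
    # probability reaches the requested mass.
    chosen, total = [], 0.0
    for label, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        chosen.append(label)
        total += p
        if total >= mass:
            break
    return chosen
```

Here `credible_set(probs)` returns the three most probable classes, since "a", "b" and "c" together carry 95% of the probability.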
|
How can I derive confidence intervals from the confusion matrix for a classifier?
|
I agree with the others. If you want it to make sense you'd have to give the "interval" for a particular individual. You could give for example a list of the most probable classes as a "CI" instead of
|
How can I derive confidence intervals from the confusion matrix for a classifier?
I agree with the others. If you want it to make sense you'd have to give the "interval" for a particular individual. You could give for example a list of the most probable classes as a "CI" instead of just the most likely one for that observation. If you don't have the information per observation though, what's the point?
EDIT: With "the most likely classes" I mean for example those whose probabilities sum up to 95%.
|
How can I derive confidence intervals from the confusion matrix for a classifier?
I agree with the others. If you want it to make sense you'd have to give the "interval" for a particular individual. You could give for example a list of the most probable classes as a "CI" instead of
|
43,138
|
First remove seasonal trend or long-term trend in time series?
|
As you suggested, one can observe non-stationarity (the symptom) but the correct remedy (the medicine) is unclear. The correct remedy could be multiple level shifts, multiple trends, or seasonal pulses, to name a few. Presuming any one approach is both simplistic and potentially damaging to good statistical analysis. The high road is to "listen to the data", à la Bacon, Box, Tukey et al., and form the appropriate kind of non-stationarity adjustment (much like a good drug prescription) to render the data stationary without incurring damage. The whole idea is to keep the model simple, but not too simple.
Non-stationarity can be induced by changes in model form, changes in parameters, or changes in error variance, besides what has been previously listed here. The message is: avoid presumptive cook-book rules suggested by some textbooks and commentators, particularly X-11 and its variants, and use the best statistical tools available to form/identify usable solutions.
For example, review the outliers and changing error variance when using X-11 on the classic airline series to construct the irregular or error process.
|
First remove seasonal trend or long-term trend in time series?
|
As you suggested, one can observe non-stationarity (symptom) but the correct remedy (medicine) is unclear. The correct remedy could be multiple level shifts, multiple trends, seasonal pulses, to
|
First remove seasonal trend or long-term trend in time series?
As you suggested, one can observe non-stationarity (the symptom) but the correct remedy (the medicine) is unclear. The correct remedy could be multiple level shifts, multiple trends, or seasonal pulses, to name a few. Presuming any one approach is both simplistic and potentially damaging to good statistical analysis. The high road is to "listen to the data", à la Bacon, Box, Tukey et al., and form the appropriate kind of non-stationarity adjustment (much like a good drug prescription) to render the data stationary without incurring damage. The whole idea is to keep the model simple, but not too simple.
Non-stationarity can be induced by changes in model form, changes in parameters, or changes in error variance, besides what has been previously listed here. The message is: avoid presumptive cook-book rules suggested by some textbooks and commentators, particularly X-11 and its variants, and use the best statistical tools available to form/identify usable solutions.
For example, review the outliers and changing error variance when using X-11 on the classic airline series to construct the irregular or error process.
|
First remove seasonal trend or long-term trend in time series?
As you suggested, one can observe non-stationarity (symptom) but the correct remedy (medicine) is unclear. The correct remedy could be multiple level shifts, multiple trends, seasonal pulses, to
|
43,139
|
First remove seasonal trend or long-term trend in time series?
|
Remove them simultaneously. This example shows you how to do it. The Henderson filter extracts the trend while the S(3,3) filter extracts the seasonality. In fact all de-seasonalizing software will do this, I think.
|
First remove seasonal trend or long-term trend in time series?
|
Remove them simultaneously. This example shows you how to do it. Henderson filter extracts the trend while S(3,3) filter extracts the seasonality. In fact all de-seasonalizing software will do this, I
|
First remove seasonal trend or long-term trend in time series?
Remove them simultaneously. This example shows you how to do it. The Henderson filter extracts the trend while the S(3,3) filter extracts the seasonality. In fact all de-seasonalizing software will do this, I think.
|
First remove seasonal trend or long-term trend in time series?
Remove them simultaneously. This example shows you how to do it. Henderson filter extracts the trend while S(3,3) filter extracts the seasonality. In fact all de-seasonalizing software will do this, I
|
43,140
|
How to tell that a reciprocal relationship exists by a residual plot?
|
To address your question directly, the key is in the increasing scatter to the right in your first image. This is essentially showing you that as fitted values increase, the spread of the residuals also increases. This means your data are heteroscedastic. As a rule of thumb, when you see a cone opening to the right, you transform with a reciprocal. That is likely why the author states a reciprocal relationship.
That said, the comment by whuber is still very relevant and looking at spread-vs-level plots would be valuable. Over time you become more familiar with distributions and their meanings.
|
How to tell that a reciprocal relationship exists by a residual plot?
|
To address your question directly, the key is in the increasing scatter to the right in your first image. This is essentially showing you that as fitted values increase, the spread of residuals also incr
|
How to tell that a reciprocal relationship exists by a residual plot?
To address your question directly, the key is in the increasing scatter to the right in your first image. This essentially showing you that as fitted values increase the spread of residuals also increase. This means your data is heteroscedastic. As a rule-of-thumb, a cone opening to the right, you transform with a reciprocal. That is likely why the author states a reciprocal relationship.
That said, the comment by whuber is still very relevant and looking at spread-vs-level plots would be valuable. Over time you become more familiar with distributions and their meanings.
|
How to tell that a reciprocal relationship exists by a residual plot?
To address your question directly, the key is in the increasing scatter to the right in your first image. This essentially showing you that as fitted values increase the spread of residuals also incr
|
43,141
|
How to decide the p and q for GARCH model?
|
As the commenters point out, increasing ARCH/GARCH orders amounts to including additional degrees of freedom, so the (log) likelihood is guaranteed to increase. If you simply follow this increase in (log) likelihood, you will overfit.
One possibility would be to balance the gain in (log) likelihood against model complexity by adding a penalty term. Information criteria like the AIC or BIC are helpful here.
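The same principle in miniature with nested linear models in base R (not a GARCH fit, but the mechanics carry over): an extra pure-noise parameter can only raise the log-likelihood, while AIC charges for it.

```r
set.seed(7)
y    <- rnorm(100)
x1   <- rnorm(100)
junk <- rnorm(100)                 # predictor unrelated to y
m0 <- lm(y ~ x1)
m1 <- lm(y ~ x1 + junk)
c(logLik(m0), logLik(m1))          # m1 never has lower likelihood
c(AIC(m0), AIC(m1))                # the penalty can reverse the ordering
```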
Here is an alternative. ARCH and GARCH are fundamentally ways to forecast future volatility. They aim at producing good density forecasts by modeling the conditional heteroskedasticity. So one way of choosing model orders would be to fit each model, then create density forecasts for a holdout sample and assess which model gives the best density forecast. Possible tools would be the probability integral transform or (proper) scoring rules.
|
43,142
|
Choice of grouping in Chi-Squared test
|
This procedure is basically the idea behind "CHi-squared Automatic Interaction Detection", or "CHAID", described by G.V. Kass in 1980. The general setting is very similar to your television watching prediction example: You want to best predict the occurrence of a categorical variable by a combination of other categorical variables. You do this by finding the split with the maximal $\chi^2$ value.
A description of the algorithm and the issues around adjusting for statistical significance are given in (Kass, 1980). In that paper the Bonferroni correction is used to adjust for the selection of the maximal $\chi^2$ value.
Some actual theory is available for the case of reduction to a $2\times2$ table (Kass, 1975).
There is an R package called CHAID which implements the algorithm and is available on R-Forge.
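A toy base-R sketch of the core maximal-chi-square step (simulated data; real CHAID also merges categories and handles nominal predictors):

```r
# For an ordered predictor with 4 levels, try each binary cut-point,
# keep the split with the largest chi-square on the collapsed 2x2 table,
# and Bonferroni-adjust the p-value for the number of splits examined.
set.seed(1)
pred    <- sample(1:4, 200, replace = TRUE)
outcome <- rbinom(200, 1, prob = c(0.2, 0.3, 0.6, 0.7)[pred])

cuts  <- 1:3
stats <- sapply(cuts, function(k) {
  tab <- table(pred <= k, outcome)
  unname(chisq.test(tab, correct = FALSE)$statistic)
})
best  <- which.max(stats)
p.adj <- min(1, length(cuts) * pchisq(stats[best], df = 1, lower.tail = FALSE))
c(best.cut = cuts[best], chisq = stats[best], p.bonf = p.adj)
```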
Although it is a little different from your question, there is a similar situation that arises when dichotomizing a continuous variable to predict another dichotomous variable. Namely, where should you put the cut-point? This is discussed in (Miller and Siegmund, 1980) and (Halpern, 1982), among others.
Yet another setting where this type of question comes up is in change-point estimation or segmentation, though it has been too long since I looked at those papers to recall authors.
References:
Halpern, J. (1982). Maximally selected chi square statistics for small samples. Biometrics, 1017-1023.
Kass, G. V. (1975). Significance testing in automatic interaction detection (AID). Applied Statistics, 178-189.
Kass, G.V. (1980). An Exploratory Technique for Investigating Large Quantities of Categorical Data. Applied Statistics, 29(2), 119-127.
Miller, R. and Siegmund, D. (1980). Maximally Selected Chi-Squares. Technical Report 64. Stanford, Calif, Division of Biostatistics, Stanford University.
|
43,143
|
How to account for overdispersion in a glm with negative binomial distribution?
|
I'm not sure how to correct the p-values. However, you can typically examine the mean-variance assumption in a negative binomial regression by looking at the residuals versus fitted values plot.
If this plot of residuals versus fitted values is not (roughly) an amorphous, random cloud of data points, then you can try using quasi-Poisson regression. Another alternative is to construct your own mean-variance relationship using quasi-likelihood.
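A quick sketch with MASS::glm.nb (MASS ships with R) and its built-in quine data, showing the diagnostic plot described above:

```r
library(MASS)   # for glm.nb() and the quine data

# Fit a negative binomial regression, then inspect residuals vs fitted:
# structure beyond a random cloud suggests the mean-variance form is off.
fit <- glm.nb(Days ~ Age + Sex, data = quine)
plot(fitted(fit), residuals(fit, type = "pearson"),
     xlab = "Fitted values", ylab = "Pearson residuals")
abline(h = 0, lty = 2)
```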
Hope this helps!
|
43,144
|
How to formulate linear mixed model to find out effects of continuous variables?
|
An idea for improvement of the marginal $R^2$ calculation used would be to assess this with the other predictors included in the model. As it stands here, the marginal $R^2$ calculation only takes into account one predictor at a time.
An alternative is to fit two models. One model contains all predictors, the other has one predictor dropped. The models can then be compared to see the decrease in marginal $R^2$ that is due to removal of the predictor. For instance:
m1 <- lmer(resp ~ pred1 + pred2 + pred3 + (1|weeks) + (1|Sample), data = Xs)
m2 <- lmer(resp ~ pred2 + pred3 + (1|weeks) + (1|Sample), data = Xs)
r.squaredGLMM(m1)[[1]]-r.squaredGLMM(m2)[[1]]
This tells you that the marginal $R^2$ drops quite a bit by simply removing the first predictor. This echoes your approach, but has the added benefit of including all relevant predictors in the model that is used to calculate the goodness of fit.
With regard to building a suitable model, why have you removed the intercept? This is a key piece of information. When you do that, you are forcing the model to pass through the origin. Specifically, you are enforcing that when the predictors take on values of 0, the predicted response must be 0. I suspect this is probably not what you want.
Since you said that you are interested in the relative effects of the predictors, standardizing your predictors as you have done is a good idea.
An alternative is to fit a model with scaled predictors such as this:
m3 <- lmer(resp ~ pred1 + pred2 + pred3 + (1|weeks) + (1|Sample), data = X)
Since you standardized the predictors the estimated $\beta$s represent the relative effect of the predictors on the outcome $resp$.
To test whether these relationships are likely to be true not only in the sample, but also in the population, a sensible approach is to conduct model comparisons such as likelihood ratio tests, AIC or BIC.
The way this is done is to remove predictors in a stepwise manner and compare the two models with your comparison method of choice. If such a comparison reveals that a predictor does not significantly contribute to an improvement in overall fit, then you can remove this predictor from your model and also consider reporting that there appears to be no relationship between that predictor and your outcome. Lots of info on this site for doing model comparison.
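The comparison logic can be sketched with plain lm() and anova() in base R (for lmer fits under ML, anova(m2, m1) performs the analogous likelihood ratio test); the data here are simulated:

```r
set.seed(5)
d <- data.frame(pred1 = rnorm(80), pred2 = rnorm(80))
d$resp <- 0.8 * d$pred1 + rnorm(80)      # resp depends on pred1 only

full    <- lm(resp ~ pred1 + pred2, data = d)
reduced <- lm(resp ~ pred2, data = d)    # pred1 dropped
anova(reduced, full)                     # small p-value: keep pred1
```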
|
43,145
|
Linear regression or mixed effects models for data with two time points?
|
If you limit yourself to a frequentist framework for the change analysis, then study participants with only 1 observation will be eliminated. An alternative might be to switch to a Bayesian framework where individuals with only a single observed period in a multiperiod model do not represent a limitation. See chapter 13 of Gelman and Hill's book Data Analysis Using Regression and Multilevel/Hierarchical Models.
|
43,146
|
Linear regression or mixed effects models for data with two time points?
|
I recommend performing multiple imputation (e.g., with mice in R) and then using a mixed model or generalized estimating equations, explicitly recognizing the clustering features.
Reliance on multiple imputation according to Rubin's approach will force you to recognize the uncertainty due to missingness without discarding potentially useful observations.
|
43,147
|
Linear regression or mixed effects models for data with two time points?
|
After reviewing this paper
Peugh, J. L. (2010). A practical guide to multilevel modeling. Journal of School Psychology. Retrieved from http://www.sciencedirect.com/science/article/pii/S0022440509000545
A potential answer for this question can be obtained by calculating the ICC and design effect (DE), which can be used to quantify the amount of between-group variation and the need for using multilevel modeling.
ICC is defined as: level 2 variance/(level 2 variance + level 1 variance)
DE is defined as: 1 + (Average number of individuals in each group - 1) x ICC.
So for the reduced models I presented in my question this would be
fm1 <- lme(mmse ~ 1, random = ~1|patientid, data = dat.long, method = "ML", na.action = na.exclude)
Linear mixed-effects model fit by maximum likelihood
Data: dat.long
Log-likelihood: -1459.29
Fixed: mmse ~ 1
(Intercept)
20.90421
Random effects:
Formula: ~1 | patientid
(Intercept) Residual
StdDev: 6.526672 6.583354
Number of Observations: 407
Number of Groups: 233
VarCorr(fm1)
patientid = pdLogChol(1)
Variance StdDev
(Intercept) 42.59744 6.526672
Residual 43.34054 6.583354
Calculating the ICC indicates that 49% of the response variable's variance occurred between individuals.
> 42.60/(42.60 + 43.34)
[1] 0.4956947
For the DE, the paper above indicates that a DE greater than 2 indicates the need to use a multilevel model. However, for the model above the DE is 1.37, so a multilevel model may not be needed for this data.
> 1+((407/233)-1)*(42.60/(42.60 + 43.34))
[1] 1.370175
|
43,148
|
Logistic regression gives very different result to Fisher's exact test - why?
|
I think the problem is that you are trying to answer two questions with the same model. The Fisher test is for the crude odds ratio, collapsing across experimental levels. The logistic model does not test the crude OR. We do not want to conduct a crude or stratified analysis if the homogeneity of the odds ratio is not met in the experimental strata. The output from your saturated logistic model can be used to check this.
Because you have coded Experimental level as a numeric, it's difficult to interpret the output. The ChoiceVAV effect is projected for an experimental level of 0. We have to use post-estimation to predict the actual, meaningful results from the experiment from the saturated logistic model.
To get the log OR for experiment one, you must add ChoiceVAV and ChoiceVAV:Experiment: 3.52 - 2.19 = 1.33 = log(35*13 / (20*6)), the last expression being the stratum-specific log OR for the first experiment.
The second experiment has stratum-specific log OR 3.52 - 2*2.19 = -0.86 = log(12*11 / (10*31)). These are of opposite signs, so the homogeneity of the OR is violated. The crude OR is not quite (but close to) a test of the weighted average of these log ORs. You can plainly see they average out to around 0. The crude OR verifies this.
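These stratum-specific quantities are easy to verify in base R from the cell counts in the expressions above:

```r
logor1 <- log(35 * 13 / (20 * 6))    # experiment 1: about  1.33
logor2 <- log(12 * 11 / (10 * 31))   # experiment 2: about -0.86
c(exp1 = logor1, exp2 = logor2)      # opposite signs: ORs not homogeneous
```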
In summary, the two experiments produce massively conflicting findings and should not be combined into a single analysis. This is the problem of overfocusing on statistical significance. That the FET is not significant is exactly what is expected.
|
43,149
|
Why and how does the inclusion of random effects in mixed models influence the fixed-effect intercept term?
|
You have to consider that you are not predicting a single intercept, but a distribution (random intercepts) across a nonlinear link function. Remember Jensen's inequality (http://en.wikipedia.org/wiki/Jensen%27s_inequality), which basically tells you that in general f(mean(x)) != mean(f(x))
If I look at the values in your example and roughly estimate that we have an intercept of 2, random effect variance of 3, and a log link, I expect a log mean on the response of
log(mean(exp(rnorm(100, 2, 3))))
which gives 5.036582, roughly fitting to your raw values.
So, to answer your general question: if you have a nonlinear link, a random effect can change the intercept because the random term is applied on a nonlinear link function.
Model misspecification could be another reason for changes in the intercept (e.g. that the random effects are not normal), but I suspect the former reason explains most of your puzzle.
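The inequality itself is easy to see in base R (arbitrary mean 2 and sd 1 here, chosen for illustration):

```r
# exp() is convex, so the mean of the transformed values exceeds the
# transformed mean -- exactly why the intercept shifts under a log link.
set.seed(1)
x <- rnorm(1e5, mean = 2, sd = 1)
exp(mean(x))    # f(mean(x)): close to exp(2) ~ 7.39
mean(exp(x))    # mean(f(x)): close to exp(2 + 1/2) ~ 12.18 (lognormal mean)
```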
|
43,150
|
Why is the restricted boltzmann machine both unsupervised and generative?
|
The definition of generative model as learning the joint probability $P(X,Y)$ is given in the context of supervised learning.
In a more general setting, the process of learning the joint probability is "generative" because knowing the joint probability allows the generation of new data - in the supervised context, having $P(X,Y)$ gives the possibility to generate new $(x,y)$ pairs.
Now, what does generative mean in the unsupervised learning context? It means sampling. And sampling is something RBM can do very conveniently, because the lack of inter-layer connections makes Gibbs sampling particularly easy.
Leaving the details of Gibbs sampling aside, it is worth noting that in the case of an RBM we have in fact $P(v,h)$ where $v$ is the visible layer and $h$ is the hidden layer.
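As a hand-wavy illustration (a made-up one-visible, one-hidden binary "RBM" with a single weight, not a real implementation), alternating the two conditionals is all the sampler does:

```r
# Block Gibbs for a toy RBM with one visible and one hidden binary unit.
# With energy E(v, h) = -(bv*v + bh*h + w*v*h), the conditionals are
# P(h = 1 | v) = sigmoid(bh + w*v) and P(v = 1 | h) = sigmoid(bv + w*h).
sigmoid <- function(z) 1 / (1 + exp(-z))
w <- 2; bv <- -1; bh <- 0.5

set.seed(9)
v <- 1
draws <- numeric(500)
for (i in 1:500) {
  h <- rbinom(1, 1, sigmoid(bh + w * v))   # sample hidden given visible
  v <- rbinom(1, 1, sigmoid(bv + w * h))   # sample visible given hidden
  draws[i] <- v
}
mean(draws)   # empirical marginal P(v = 1)
```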
Clarification
Sampling is used in different contexts referring to different ideas. I am often somewhat sloppy in the use of this term myself.
In the context of the answer, with sampling I mean generating new samples as opposed to sampling from an available set of elements (just picking a bunch of them with/or without replacement, for example).
In order to be able to generate new (x,y) pairs, you need to model the joint distribution and then sample from it (as done here in a more trivial example where the distribution is just a gaussian).
|
43,151
|
Why is the restricted boltzmann machine both unsupervised and generative?
|
Let X=(x1,x2,x3,x4,x5) and let the target variable be Y=(y1,y2). A generative model learns a joint probability distribution P(X,Y)=P(x1,x2,x3,x4,x5,y1,y2).
So now think of this P(X,Y) in the form of a table with all these variables and with another column appended to it as the probability of the particular configuration of the variable values. A generative model, as you know, defines how likely the label (y1,y2) generated the data (x1,x2,x3,x4,x5). Now consider this: if my data is (x1,x2,x3,x4,x5,x6,x7) then I can learn a joint probability distribution and use Bayes' theorem to fill in any missing values like (x3,x7) as P(x3,x7|(x1,x2,x4,x5,x6)).
These missing values can be your target variable too.
Now you can call them target variables or whatever. But the point here is that a generative model defines a joint probability distribution over variables regardless of what names you give them. Since an RBM defines a joint probability distribution over the input variables (which are just the data, with no labels), it is unsupervised learning.
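A tiny numeric version of the table view (two binary variables with made-up probabilities): once the joint table is known, you can both condition on observed values and generate new rows.

```r
# Joint P(x, y) over two binary variables as a 2x2 table.
joint <- matrix(c(0.4, 0.1,
                  0.2, 0.3),
                nrow = 2, byrow = TRUE,
                dimnames = list(x = 0:1, y = 0:1))
stopifnot(isTRUE(all.equal(sum(joint), 1)))

# Conditioning (Bayes): P(y | x = 1) read straight off the joint table.
p.y.given.x1 <- joint["1", ] / sum(joint["1", ])

# Generating: draw new (x, y) pairs from the joint distribution.
set.seed(3)
cells <- sample(4, 1000, replace = TRUE, prob = as.vector(joint))
newxy <- cbind(x = (cells - 1) %% 2, y = (cells - 1) %/% 2)
head(newxy)
```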
|
43,152
|
Is it necessary to do $k$-fold cross validation for decision trees in random forests?
|
I can recommend this article discussing good CV practice.
(A) When simply running one RF model: yes, OOB-CV is a fine estimate of your future prediction performance, given i.i.d. sampling. For many practical instances you have neither the time nor the need for anything more. A default RF model is simply good enough; you will only start fiddling with the hyperparameters later, if ever. I would spend more time wondering which prediction performance metric (AUC, accuracy, recall, etc.) best answers my questions. A little fiddling with hyperparameters (mtry, sample size) does not make your OOB-CV vastly over-optimistic.
(B1) When comparing across classifiers (SVM, logistic, RF, etc.): you need to use the same CV regime, so you cannot use OOB-CV, which is only available for RF. Use e.g. 20-times repeated 10-fold CV, where all models are tested in the same folds (i.e. on the same partitions).
(B2) When performing grid search and variable selection: to evaluate the predictive performance of each variant of your model you would probably use OOB-CV or some other CV. To obtain an unbiased estimate of the overall performance, you need to wrap your model-selection process in an outer cross-validation, called nested CV.
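For (B2), the nested-CV bookkeeping can be sketched without any particular ML library (the fold counts and the placeholder "score" are illustrative only -- the inner loop is where your grid search or variable selection would live):

```python
import random

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nested_cv(data, k_outer=5, k_inner=3):
    """Outer loop estimates performance; the inner loop (run on each
    outer training set only) is where model selection happens."""
    outer_scores = []
    for test_fold in kfold_indices(len(data), k_outer):
        test_set = set(test_fold)
        train = [i for i in range(len(data)) if i not in test_set]
        # Inner CV on `train` only: pick hyperparameters / variables here.
        inner_folds = kfold_indices(len(train), k_inner, seed=1)
        # ... fit each model variant on inner training folds, score on
        #     inner test folds, keep the best variant ...
        # Then refit the chosen variant on `train` and score on test_fold:
        outer_scores.append(len(test_fold) / len(data))  # placeholder score
    return outer_scores

scores = nested_cv(list(range(100)))
```

The key property is that the outer test folds never influence the model-selection step, which is what keeps the overall performance estimate unbiased.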
|
43,153
|
Is it necessary to do $k$-fold cross validation for decision trees in random forests?
|
For decision trees, is it better to use the full train data set to construct the tree?
It is always better to have more data to train your model. But if you use all the data that you have in hand, then you have no idea about your test error (of course you can estimate it indirectly, but estimates remain estimates), and furthermore it is hard to know whether you are overfitting your data. So in my experience it is not recommended to use all the data to fit your model.
In a random forest, is k-fold cross-validation necessary? I thought you could use OOB error?
OOB error can be used to tune your parameters, but once that is done, OOB is no longer a valid test set for evaluating your model.
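For context on what OOB means mechanically: each tree is grown on a bootstrap sample, and the rows left out of that sample (about 36.8% on average, since $(1-1/n)^n \to 1/e$) form that tree's out-of-bag set. A minimal Python sketch:

```python
import random

def bootstrap_oob(n, seed=0):
    """One bootstrap draw of size n; return (in-bag, out-of-bag) indices."""
    rng = random.Random(seed)
    in_bag = [rng.randrange(n) for _ in range(n)]
    oob = set(range(n)) - set(in_bag)
    return in_bag, oob

# Averaged over many draws, the OOB fraction approaches (1 - 1/n)^n -> 1/e
fracs = []
for s in range(200):
    _, oob = bootstrap_oob(500, seed=s)
    fracs.append(len(oob) / 500)
mean_frac = sum(fracs) / len(fracs)
```

The OOB rows act as a built-in test set for each tree -- which is exactly why reusing them for tuning "spends" them.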
|
43,154
|
Is it necessary to do $k$-fold cross validation for decision trees in random forests?
|
The answer is No. Random forests don't need k-fold CV. As an example, when I compare RF classification results with other classifiers for which k-fold CV was used, I only perform a single run with the entire input dataset for the RF model. RF will take care of the training and testing data by itself -- so it is unlike many other classifiers, and this is by design (by Leo Breiman).
|
43,155
|
Multiple and long seasonality for a SARIMA model in R [closed]
|
Structure your data as an msts (multiple seasonal time series), where you can specify
msts(your_data, start=2010, seasonal.periods=c(144,1008,52560)).
Then, when fitting Fourier terms for seasonality, you must specify the three seasonal periods in the fourier function:
reg <- fourier(your_data, K=c(i,n,j)); this will build Fourier terms for each seasonal period.
You can loop through a linear regression where the explanatory variables are the Fourier terms to obtain the best fit, and then put them into the ARIMA model as regressors. These are equivalent, because ARIMA with regressors is actually a regression with ARIMA errors.
auto.arima(your_data, xreg=as.matrix(reg), seasonal=F)
Don't forget to specify seasonal=F to avoid the maximum supported lag (350) limitation.
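The columns that forecast::fourier builds are just paired sin/cos terms of orders 1..K for each seasonal period. A hedged Python equivalent of those columns (purely illustrative, not a substitute for the R workflow; the K values are placeholders):

```python
import math

def fourier_terms(t, period, K):
    """sin/cos pairs of order 1..K for time index t and one seasonal
    period -- the regressor columns fourier() builds per period."""
    cols = []
    for k in range(1, K + 1):
        cols.append(math.sin(2 * math.pi * k * t / period))
        cols.append(math.cos(2 * math.pi * k * t / period))
    return cols

# One row of regressors for t=10, periods 144/1008/52560, K=c(3,2,1):
row = (fourier_terms(10, 144, 3)
       + fourier_terms(10, 1008, 2)
       + fourier_terms(10, 52560, 1))
```

Each period contributes 2K columns, so long seasonal periods stay cheap: the model estimates a few smooth harmonics instead of thousands of seasonal lags.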
|
43,156
|
Given loads of data, can we always model it with polynomials?
|
Imagine replacing an arbitrary model with polynomial parameters by a series of dummies for all values of the explanatory variables and all their interactions. If you have enough experimental data, that is going to be as general as possible. The highest-order polynomial function of all the interactions is going to fit the data just as well. So if you have the data and the setting to estimate the fully saturated model, that model will be model-free.
For example, suppose the true model is one where X and Y take dummy (0/1) values only:
Z = F(X,Y)
You could also write this model as:
Z = beta0 + beta1 * 1(X) + beta2 * 1(Y) + beta3 * 1(X) * 1(Y)
These functions agree everywhere on the values taken by X and Y. This gets to be quite thorny if you have multiple parameters and they take multiple values, but the principle is the same.
I learned about this from Mostly Harmless Econometrics (pp. 48-51), where the authors argue that the case of saturated modeling implies that linear modeling is just as general as non-linear modeling. Moving to non-saturated models means that non-linear models can cover functions with fewer parameters, but with enough free parameters and the data to estimate them, the two cover the same set of models.
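The saturation argument can be checked directly: on dummy-valued X and Y, the four coefficients of Z = b0 + b1*X + b2*Y + b3*X*Y are pinned down by the four cell values of any F(X,Y), so the linear-in-dummies model reproduces F exactly (the F used here is made up):

```python
from itertools import product

# Arbitrary 'true' model on dummy inputs: (x, y) -> z
F = {(0, 0): 1.5, (0, 1): -2.0, (1, 0): 0.3, (1, 1): 7.0}

# Solve Z = b0 + b1*X + b2*Y + b3*X*Y exactly from the four cells
b0 = F[(0, 0)]
b1 = F[(1, 0)] - F[(0, 0)]
b2 = F[(0, 1)] - F[(0, 0)]
b3 = F[(1, 1)] - F[(1, 0)] - F[(0, 1)] + F[(0, 0)]

def saturated(x, y):
    return b0 + b1 * x + b2 * y + b3 * x * y

# The dummy-interaction model matches F at every point
max_err = max(abs(saturated(x, y) - F[(x, y)])
              for x, y in product([0, 1], repeat=2))
```

Four free parameters, four cells: the fit is exact whatever F is, which is the sense in which the saturated linear model is "model-free".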
|
43,157
|
ACF and PACF plot analysis
|
The borderline statistical significance of the autocorrelations has been noted in the comments and in another answer. What looks interesting is that the autocorrelations at Lag 4 and Lag 8 persist also in the partial ACF.
Reality should come into play at this point: what are these data? Given whatever knowledge you have of the process they measure, is it reasonable to expect that the current level should depend on Lags 4 and 8? If yes, the low estimated strength of autocorrelation is not necessarily an artifact, but an indication that said autocorrelation exists but is weak.
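For reference, the borderline check compares each sample autocorrelation against the approximate 95% white-noise band $\pm 1.96/\sqrt{n}$. A bare-bones Python sketch (the weakly lag-4-dependent toy series is invented for illustration):

```python
import random
import math

def acf(x, lag):
    """Sample autocorrelation at a given lag."""
    n = len(x)
    m = sum(x) / n
    denom = sum((v - m) ** 2 for v in x)
    num = sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag))
    return num / denom

rng = random.Random(42)
# Toy series with a weak dependence on lag 4 (coefficient 0.3)
x = [rng.gauss(0, 1) for _ in range(4)]
for _ in range(996):
    x.append(0.3 * x[-4] + rng.gauss(0, 1))

band = 1.96 / math.sqrt(len(x))  # approx. 95% band under white noise
r4 = acf(x, 4)                   # should sit clearly above the band
```

A weak but real lag-4 dependence produces exactly the picture described: a small spike that nonetheless clears the band.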
|
43,158
|
Backshift operator property not clear
|
That step comes from the Taylor expansion of $\frac{1}{1-x}$, which is $1 + x + x^2 + ...$ for $|x| < 1$. Just substitute the backshift operator $B$ for $x$ in the author's derivation and you'll arrive at the same result.
Have you taken a class on integral calculus? Usually you'll go through that derivation when you cover series. Here's mine:
Let $f(x) = \frac{1}{1-x}$. Then:
$f'(x) = \frac{1}{(1-x)^2}$
$f''(x) = \frac{2}{(1-x)^3}$
$...$
$f^{(n)}(x) = \frac{n!}{(1-x)^{n+1}}$
The Maclaurin series (Taylor series centered at $x=0$) is therefore, since $f^{(n)}(0) = n!$,
$f(x) = f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + ... = 1 + \frac{1!}{1!}x + \frac{2!}{2!}x^2 + ... = 1 + x + x^2 + ...$
Now replace $x$ in all of that with the backwards shift operator $B$ and you'll get the author's expression. Does that help?
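A quick numerical sanity check that the partial sums $1 + x + ... + x^n$ really do converge to $\frac{1}{1-x}$ for a value of $x$ inside the region of convergence $|x| < 1$:

```python
def partial_sum(x, n):
    """1 + x + x^2 + ... + x^n (truncated geometric series)."""
    return sum(x ** k for k in range(n + 1))

x = 0.5
exact = 1 / (1 - x)        # = 2.0
approx = partial_sum(x, 30)
err = abs(exact - approx)  # shrinks geometrically, like |x|**(n+1)
```

For $x = 0.5$ and 30 terms the truncation error is already below $10^{-8}$, which is why dropping the tail of the operator expansion is harmless in practice.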
|
43,159
|
How to do a power analysis for an unbalanced mixed effects ANOVA?
|
This is only a pointer towards a solution, not a definitive answer: the paper below could give some direction for this kind of design:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4394709/
The supplementary material shows example power calculations for t-tests and regressions in R. Since regressions and ANOVAs are the same thing in principle, I believe this is a very fruitful start.
|
43,160
|
How to do a power analysis for an unbalanced mixed effects ANOVA?
|
A pragmatic choice is to take the smaller of the two treatment groups as the $n$ of both groups. This gives you a conservative measure of power. In G*Power you would just use the default proportion test sample size calculation.
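That conservatism can be illustrated with the standard normal-approximation power formula for a two-proportion z-test (a Python sketch; the proportions and group sizes are invented). Using the smaller group's $n$ for both groups yields the lower of the two power estimates:

```python
from statistics import NormalDist

def power_two_prop(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test with
    n subjects per group (normal approximation)."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    pbar = (p1 + p2) / 2
    se0 = (2 * pbar * (1 - pbar) / n) ** 0.5   # SE under H0
    se1 = (p1 * (1 - p1) / n + p2 * (1 - p2) / n) ** 0.5  # SE under H1
    return nd.cdf((abs(p1 - p2) - z_a * se0) / se1)

# Using the smaller group's n for both groups is conservative:
lo = power_two_prop(0.50, 0.65, n=80)    # n = size of the smaller group
hi = power_two_prop(0.50, 0.65, n=120)   # n = size of the larger group
```

Power is increasing in $n$, so taking the smaller group's size for both groups can only understate the true power -- which is the point of the pragmatic rule.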
|
43,161
|
Most significantly frequent category
|
Not directly, because the categories you chose to compare are based on their observed values.
It's still possible to test such a thing using a chi-squared test statistic, but the distribution of the test statistic under the null hypothesis may not be (and I expect is not) well approximated by the distribution that applies when the choice of categories being compared is not based on the observed data.
That is, you need to compute a new distribution for the test statistic.
Please also note that if you've already made one comparison (such as the overall chi-squared) and the decision to make this comparison is conditional on that one, the test is also affected by that conditional decision.
Some details:
Here's the situation as I comprehend what's going on.
There's a contingency table of consonant counts.
We decide to test for equality of proportion among two categories. We can construct a chi-square goodness of fit test in the usual manner, by conditioning on their total:
p t Total
278 256 | 534
(However, this is effectively a one-tailed test, since we know the observed p-count is greater than the observed t-count.)
Then we can, of course, compute a chi-square test statistic:
> chisq.test(c(278,256))
Chi-squared test for given probabilities
data: c(278, 256)
X-squared = 0.9064, df = 1, p-value = 0.3411
The p-value may not mean much, but the statistic is still a measure of the size of the difference between the two.
So how do we generate the distribution under the null? It depends on what we assume about the situation and whether this is a post hoc test.
As an example, let's say we are in the situation where we have the 6-category table without any previous test, and we're interested in the question "Is the most common category unusually more common than the second-most common one?" against the null that both are just coming from a distribution where all 6 categories are equally likely.
Then we can easily simulate from the distribution under the null. This:
chisq.test(sort(table(sample(1:6,1000,repl=TRUE)),decr=TRUE)[1:2])$statistic
generates a single observation from that null. We can repeat that many times to get a sense of what the distribution looks like:
Because of the discreteness, it's a bit hard to tell if that's well approximated by the $\chi^2_1$ distribution or not, but a look at the mean and variance suggests not. If we match by mean and variance, however, an appropriately scaled version of the statistic has approximately the right distribution at a different d.f., particularly in the lower tail (of the p-values, i.e. the upper-tail of the chi-square):
If we now look at the original data:
> pchisq(2.4*chisq.test(c(278,256))$statistic,df=0.8,lower.tail=FALSE)
X-squared
0.1059023
This approach suggests a p-value of around 0.1 using a chi-square statistic and a modified chi-square distribution. If we compute the p-value directly from the simulated distribution, we get a p-value of 0.0993.
In other words, can I claim that p is the most frequent consonant?
While I think it's possible to do something like that, I don't think testing the most popular against the second most popular is necessarily the best approach. We could simply consider, for example, the distribution of the proportion of the largest group under the null of equal proportions.
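For completeness, the same null simulation and p-value can be written in Python (the counts 278 and 256 and the simulation sizes mirror the R code above):

```python
import random

def chisq_top2(counts):
    """Goodness-of-fit chi-square comparing the two largest counts
    against equal expected frequencies, as in chisq.test(c(a, b))."""
    a, b = sorted(counts, reverse=True)[:2]
    e = (a + b) / 2
    return (a - e) ** 2 / e + (b - e) ** 2 / e

def null_draw(rng, n=1000, k=6):
    """One draw of the statistic under 'all 6 categories equally likely'."""
    counts = [0] * k
    for _ in range(n):
        counts[rng.randrange(k)] += 1
    return chisq_top2(counts)

rng = random.Random(1)
null = [null_draw(rng) for _ in range(2000)]
observed = chisq_top2([278, 256])   # = 0.9064, matching the R output
p_value = sum(s >= observed for s in null) / len(null)
```

The simulated p-value lands near 0.1, consistent with both the scaled-chi-square approximation and the direct simulation quoted above.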
|
43,162
|
How to visualize a range (min/med/max)?
|
Example number 1 seems to be nice if you have different minimum thresholds among the categories.
As pointed out by Glen_b and whuber, it seems that examples number 2 and number 3 do not show the ranges of your categories, but just one single statistic (it could be the median, or the maximum value) at the top of the horizontal bars.
Example number 4 is a little strange because the bell curve does not represent the distribution of the bars (for example, the light blue dot 'average paid' is the average of the bell curve, not the average of the quantities shown in the bars). It is not "visually compelling yet immediately understandable" to me.
As you asked for another option, I would suggest the boxplot, which shows:
outliers (the dots),
minimum and maximum values without considering outliers (the end of the whiskers)1,
first and third quartiles (the edges of the box), and
median (the horizontal bar inside the box).
Each box is a category. Order the boxes from left to right starting with the category with greatest median.
Example number 1 is simpler to understand, so it will depend on whether a boxplot will really help.
1: see whuber's comment for clarification.
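The quantities a boxplot displays can be computed directly with the Python standard library (the sample data below is made up):

```python
from statistics import quantiles

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 30]   # 30 is an outlier

q1, med, q3 = quantiles(data, n=4, method='inclusive')
iqr = q3 - q1
lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [v for v in data if v < lo_fence or v > hi_fence]   # the dots
inliers = [v for v in data if lo_fence <= v <= hi_fence]
whiskers = (min(inliers), max(inliers))  # whisker ends (excluding outliers)
```

The box runs from q1 to q3 with the median bar at med; the whiskers stop at the most extreme values inside the 1.5-IQR fences, and anything beyond becomes a dot.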
|
43,163
|
Linear regression without intercept - sampling variance of coefficient
|
In your problem, assuming joint-normality of the variables, you can write the joint distribution of a single data point $(X_i, Y_i)$ as:
$$\begin{bmatrix} X_i \\ Y_i \end{bmatrix} \sim \text{N} \left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \sigma_X^2 & \rho \sigma_X \sigma_Y \\ \rho \sigma_X \sigma_Y & \sigma_Y^2 \end{bmatrix} \right).$$
With a bit of algebra, the value $Y_i$ can be shown to be equivalent to:
$$Y_i = \rho \frac{\sigma_Y}{\sigma_X} \cdot X_i + \sqrt{1 - \rho^2} \sigma_Y \cdot \varepsilon_i,$$
where $\varepsilon_i$ is an independent standard normal error term. Hence, the true regression model is:
$$\begin{matrix} Y_i = \beta_0 + \beta_1 \cdot X_i + \sigma \cdot \varepsilon_i & & & \beta_0 = 0 & \beta_1 = \rho \frac{\sigma_Y}{\sigma_X} & \sigma = \sqrt{1 - \rho^2} \sigma_Y \end{matrix}.$$
Simplifying the variance problem: When you estimate the model coefficients in the model with an intercept term, you would expect to get an intercept estimate close to the true value of zero, which means you would also expect the estimated slope coefficients to be similar with or without the inclusion of an intercept term in the model (as you have pointed out). However, the inclusion of an intercept term will tend to reduce the variance of the estimated slope coefficient. You have:
$$X_i Y_i = \rho \frac{\sigma_Y}{\sigma_X} \cdot X_i^2 + \sqrt{1 - \rho^2} \sigma_Y \cdot X_i \varepsilon_i.$$
Defining $\boldsymbol{Z} \equiv \boldsymbol{X} / \sigma_X$ and $\boldsymbol{U} \equiv \| \boldsymbol{Z} \|^2$ allows us to write:
$$\hat{\beta}_1 = \frac{\sum_{i=1}^n X_i Y_i}{\sum_{i=1}^n X_i^2} = \rho \frac{\sigma_Y}{\sigma_X} + \sqrt{1 - \rho^2} \frac{\sigma_Y}{\sigma_X} \cdot \frac{\boldsymbol{Z} \cdot \boldsymbol{\varepsilon}}{\boldsymbol{U}}.$$
Taking the variance gives:
$$\mathbb{V}{(\hat{\beta}_1)} = (1 - \rho^2) \frac{\sigma_Y^2}{\sigma_X^2} \mathbb{V}\left(\frac{\boldsymbol{Z} \cdot \boldsymbol{\varepsilon}}{\boldsymbol{U}} \right).$$
Now, the three vectors in the variance operator are independent (using Cochran's theorem). The numerator is a sum of products of independent standard normal random variables, and the denominator is a chi-squared random variable.
I must confess that I am not sure where to go from here. I do not recognise the variance expression as any simple form, though maybe others will. In any case, I hope that gives you some progress towards what you want.
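One way the calculation could be finished (my addition, not part of the original answer): conditional on $\boldsymbol{Z}$, the ratio $\boldsymbol{Z} \cdot \boldsymbol{\varepsilon} / \boldsymbol{U}$ is normal with mean zero and variance $1/\boldsymbol{U}$, and $\boldsymbol{U} \sim \chi^2_n$. The law of total variance then gives
$$\mathbb{V}\left(\frac{\boldsymbol{Z} \cdot \boldsymbol{\varepsilon}}{\boldsymbol{U}}\right) = \mathbb{E}\left[\frac{1}{\boldsymbol{U}}\right] = \frac{1}{n-2} \quad (n > 2),$$
using $\mathbb{E}[1/\chi^2_n] = 1/(n-2)$, so that $\mathbb{V}(\hat{\beta}_1) = (1 - \rho^2) \sigma_Y^2 / \left(\sigma_X^2 (n-2)\right)$.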
|
Linear regression without intercept - sampling variance of coefficient
|
In your problem, assuming joint-normality of the variables, you can write the joint distribution of a single data point $(X_i, Y_i)$ as:
$$\begin{bmatrix} X_i \\ Y_i \end{bmatrix} \text{ ~ N} \left( \
|
Linear regression without intercept - sampling variance of coefficient
In your problem, assuming joint-normality of the variables, you can write the joint distribution of a single data point $(X_i, Y_i)$ as:
$$\begin{bmatrix} X_i \\ Y_i \end{bmatrix} \sim \text{N} \left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \sigma_X^2 & \rho \sigma_X \sigma_Y \\ \rho \sigma_X \sigma_Y & \sigma_Y^2 \end{bmatrix} \right).$$
With a bit of algebra, the value $Y_i$ can be shown to be equivalent to:
$$Y_i = \rho \frac{\sigma_Y}{\sigma_X} \cdot X_i + \sqrt{1 - \rho^2} \sigma_Y \cdot \varepsilon_i,$$
where $\varepsilon_i$ is an independent standard normal error term. Hence, the true regression model is:
$$\begin{matrix} Y_i = \beta_0 + \beta_1 \cdot X_i + \sigma \cdot \varepsilon_i & & & \beta_0 = 0 & \beta_1 = \rho \frac{\sigma_Y}{\sigma_X} & \sigma = \sqrt{1 - \rho^2} \sigma_Y \end{matrix}.$$
Simplifying the variance problem: When you estimate the model coefficients in the model with an intercept term, you would expect to get an intercept estimate close to the true value of zero, which means you would also expect the estimated slope coefficients to be similar with or without the inclusion of an intercept term in the model (as you have pointed out). However, the inclusion of an intercept term will tend to reduce the variance of the estimated slope coefficient. You have:
$$X_i Y_i = \rho \frac{\sigma_Y}{\sigma_X} \cdot X_i^2 + \sqrt{1 - \rho^2} \sigma_Y \cdot X_i \varepsilon_i.$$
Defining $\boldsymbol{Z} \equiv \boldsymbol{X} / \sigma_X$ and $\boldsymbol{U} \equiv \| \boldsymbol{Z} \|^2$ allows us to write:
$$\hat{\beta}_1 = \frac{\sum_{i=1}^n X_i Y_i}{\sum_{i=1}^n X_i^2} = \rho \frac{\sigma_Y}{\sigma_X} + \sqrt{1 - \rho^2} \frac{\sigma_Y}{\sigma_X} \cdot \frac{\boldsymbol{Z} \cdot \boldsymbol{\varepsilon}}{\boldsymbol{U}}.$$
Taking the variance gives:
$$\mathbb{V}{(\hat{\beta}_1)} = (1 - \rho^2) \frac{\sigma_Y^2}{\sigma_X^2} \mathbb{V}\left(\frac{\boldsymbol{Z} \cdot \boldsymbol{\varepsilon}}{\boldsymbol{U}} \right).$$
Now, the three vectors in the variance operator are independent (using Cochran's theorem). The numerator is a sum of products of independent standard normal random variables, and the denominator is a chi-squared random variable.
I must confess that I am not sure where to go from here. I do not recognise the variance expression as any simple form, though maybe others will. In any case, I hope that gives you some progress towards what you want.
|
Linear regression without intercept - sampling variance of coefficient
In your problem, assuming joint-normality of the variables, you can write the joint distribution of a single data point $(X_i, Y_i)$ as:
$$\begin{bmatrix} X_i \\ Y_i \end{bmatrix} \text{ ~ N} \left( \
|
43,164
|
Linear regression without intercept - sampling variance of coefficient
|
I don't know how you found that $\sigma_{\hat{\alpha_1}}^2 \neq \sigma_{\hat{\beta_1}}^2$. Using this R code
alpha <- c()
beta <- c()
for(i in 1:1000){
dat<-matrix(rnorm(200000,0,1),nrow=100000,byrow=T)
cov <- chol(matrix(c(5,2,2,5),byrow=T,nrow=2))
dat <- dat%*%cov
X1 <- matrix(0,nrow=100000,ncol=2)
X1[,1] <- 1
X1[,2] <- dat[,2]
alpha <- cbind(alpha,solve(t(X1)%*%(X1))%*%t(X1)%*%dat[,1])
beta <- cbind(beta,solve(dat[,2]%*%dat[,2])%*%dat[,2]%*%dat[,1])
}
I got
> var(alpha[2,])
[1] 8.310618e-06
> var(beta[1,])
[1] 8.309166e-06
However, your model is equal to $y = Xb+e$, where $\sigma_{b}^2 = (X^{T}X)^{-1}\sigma_e^2$. Since $\sigma_e^2 = \sigma_y^2 - \sigma_{xy} \frac{\sigma_{xy}}{\sigma_x^2}$, which is equal to $\sigma_y^2(1-r^2)$, $\sigma_{b_1}^2 = \frac{\sigma_e^2}{\sum_i^n x_i^2}$, and $\sigma_{b_0}^2 = \frac{\sigma_e^2}{n}$
If you expand the above script for calculating $e$, and storing $e^{T}e$, $y^{T}y$, $x^{T}x$ and $x^{T}y$, you can calculate all these values.
Cheers
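As a quick sanity check (my sketch, not part of the original answer; numpy stands in for the R code above, with a smaller sample size and more replications), the slope variances with and without an intercept come out essentially equal:

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[5.0, 2.0], [2.0, 5.0]])     # same covariance as the R script
L = np.linalg.cholesky(cov)
n, reps = 200, 2000                          # smaller n than the R run, more replications
slope_with, slope_without = [], []
for _ in range(reps):
    d = rng.standard_normal((n, 2)) @ L.T    # rows ~ N(0, cov)
    y, x = d[:, 0], d[:, 1]
    X = np.column_stack([np.ones(n), x])     # design with intercept
    slope_with.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    slope_without.append((x @ y) / (x @ x))  # no-intercept slope estimate
v_with, v_without = np.var(slope_with), np.var(slope_without)
```

With centred data the two variances should differ only by an $O(1/n)$ factor, which the simulation confirms.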
|
Linear regression without intercept - sampling variance of coefficient
|
I don't know how you found that $\sigma_{\hat{\alpha_1}}^2 \neq \sigma_{\hat{\beta_1}}^2$. Using this R code
alpha <- c()
beta <- c()
for(i in 1:1000){
dat<-matrix(rnorm(200000,0,1),nrow=100000,by
|
Linear regression without intercept - sampling variance of coefficient
I don't know how you found that $\sigma_{\hat{\alpha_1}}^2 \neq \sigma_{\hat{\beta_1}}^2$. Using this R code
alpha <- c()
beta <- c()
for(i in 1:1000){
dat<-matrix(rnorm(200000,0,1),nrow=100000,byrow=T)
cov <- chol(matrix(c(5,2,2,5),byrow=T,nrow=2))
dat <- dat%*%cov
X1 <- matrix(0,nrow=100000,ncol=2)
X1[,1] <- 1
X1[,2] <- dat[,2]
alpha <- cbind(alpha,solve(t(X1)%*%(X1))%*%t(X1)%*%dat[,1])
beta <- cbind(beta,solve(dat[,2]%*%dat[,2])%*%dat[,2]%*%dat[,1])
}
I got
> var(alpha[2,])
[1] 8.310618e-06
> var(beta[1,])
[1] 8.309166e-06
However, your model is equal to $y = Xb+e$, where $\sigma_{b}^2 = (X^{T}X)^{-1}\sigma_e^2$. Since $\sigma_e^2 = \sigma_y^2 - \sigma_{xy} \frac{\sigma_{xy}}{\sigma_x^2}$, which is equal to $\sigma_y^2(1-r^2)$, $\sigma_{b_1}^2 = \frac{\sigma_e^2}{\sum_i^n x_i^2}$, and $\sigma_{b_0}^2 = \frac{\sigma_e^2}{n}$
If you expand the above script for calculating $e$, and storing $e^{T}e$, $y^{T}y$, $x^{T}x$ and $x^{T}y$, you can calculate all these values.
Cheers
|
Linear regression without intercept - sampling variance of coefficient
I don't know how you found that $\sigma_{\hat{\alpha_1}}^2 \neq \sigma_{\hat{\beta_1}}^2$. Using this R code
alpha <- c()
beta <- c()
for(i in 1:1000){
dat<-matrix(rnorm(200000,0,1),nrow=100000,by
|
43,165
|
Test whether time of maximum differs across two groups
|
Well, I'm not entirely sure whether I found an answer... but my idea won't fit into a comment. So I'll post and see what the smarter people here point out.
As I commented above, I'd bootstrap the max time within each group, but stratified by individual - by resampling rows of X1 and X2:
library(boot)
b1 <- boot(X1,statistic=function(X,index)which.max(apply(X[index,],2,mean)),10000)
b2 <- boot(X2,statistic=function(X,index)which.max(apply(X[index,],2,mean)),10000)
Then we can look at the distribution of the differences in the bootstrapped maxima:
foo <- b1$t-b2$t
ecdf(foo)(0)
hist(foo,breaks=seq(-10.5,10.5))
Now, the ecdf at zero tells us that 96.89% of differences are less than zero, which in some circles would be enough to claim $p<0.05$ and call it a day ;-) However, I really don't understand where the two peaks in the histogram come from. Does anybody have an idea?
(Fun question, anyway...)
EDIT: just for kicks, here is the other approach that could reasonably be taken - tabulate the max times for each individual by group and perform a $\chi^2$ test:
incidence <- cbind(
table(factor(apply(X1,1,which.max),levels=1:10)),
table(factor(apply(X2,1,which.max),levels=1:10)))
chisq.test(incidence)
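For readers without R, here is a rough Python/numpy version of the stratified bootstrap idea (my sketch; `X1`/`X2` below are simulated stand-ins for the question's data, constructed so that group 1 peaks earlier than group 2):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(10)
# Simulated stand-ins: 30 individuals x 10 time points per group;
# group 1 peaks around t=3, group 2 around t=6.
X1 = rng.normal(np.exp(-(t - 3) ** 2 / 4.0), 0.5, size=(30, 10))
X2 = rng.normal(np.exp(-(t - 6) ** 2 / 4.0), 0.5, size=(30, 10))

def boot_argmax(X, B=2000):
    """Bootstrap the time of the maximum column mean, resampling rows (individuals)."""
    idx = rng.integers(0, X.shape[0], size=(B, X.shape[0]))
    return X[idx].mean(axis=1).argmax(axis=1)

diff = boot_argmax(X1) - boot_argmax(X2)
p_like = np.mean(diff < 0)   # fraction of bootstrap differences below zero
```

As in the R version, `p_like` plays the role of `ecdf(foo)(0)`.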
|
Test whether time of maximum differs across two groups
|
Well, I'm not entirely sure whether I found an answer... but my idea won't fit into a comment. So I'll post and see what the smarter people here point out.
As I commented above, I'd bootstrap the max
|
Test whether time of maximum differs across two groups
Well, I'm not entirely sure whether I found an answer... but my idea won't fit into a comment. So I'll post and see what the smarter people here point out.
As I commented above, I'd bootstrap the max time within each group, but stratified by individual - by resampling rows of X1 and X2:
library(boot)
b1 <- boot(X1,statistic=function(X,index)which.max(apply(X[index,],2,mean)),10000)
b2 <- boot(X2,statistic=function(X,index)which.max(apply(X[index,],2,mean)),10000)
Then we can look at the distribution of the differences in the bootstrapped maxima:
foo <- b1$t-b2$t
ecdf(foo)(0)
hist(foo,breaks=seq(-10.5,10.5))
Now, the ecdf at zero tells us that 96.89% of differences are less than zero, which in some circles would be enough to claim $p<0.05$ and call it a day ;-) However, I really don't understand where the two peaks in the histogram come from. Does anybody have an idea?
(Fun question, anyway...)
EDIT: just for kicks, here is the other approach that could reasonably be taken - tabulate the max times for each individual by group and perform a $\chi^2$ test:
incidence <- cbind(
table(factor(apply(X1,1,which.max),levels=1:10)),
table(factor(apply(X2,1,which.max),levels=1:10)))
chisq.test(incidence)
|
Test whether time of maximum differs across two groups
Well, I'm not entirely sure whether I found an answer... but my idea won't fit into a comment. So I'll post and see what the smarter people here point out.
As I commented above, I'd bootstrap the max
|
43,166
|
Handling missing data in a time series
|
You may want to see Honaker, J. and King, G. (2010). What to do about missing values in time-series cross-section data. American Journal of Political Science, 54(2):561–581, and the related R package Amelia II for multiple imputation with respect to time series. I am not quite sure about your application as to location within the time series, but their imputation model is pretty sophisticated and configurable... so perhaps.
|
Handling missing data in a time series
|
You may want to see Honaker, J. and King, G. (2010). What to do about missing values in time-series cross-section data. American Journal of Political Science, 54(2):561–581, and the related R package
|
Handling missing data in a time series
You may want to see Honaker, J. and King, G. (2010). What to do about missing values in time-series cross-section data. American Journal of Political Science, 54(2):561–581, and the related R package Amelia II for multiple imputation with respect to time series. I am not quite sure about your application as to location within the time series, but their imputation model is pretty sophisticated and configurable... so perhaps.
|
Handling missing data in a time series
You may want to see Honaker, J. and King, G. (2010). What to do about missing values in time-series cross-section data. American Journal of Political Science, 54(2):561–581, and the related R package
|
43,167
|
Handling missing data in a time series
|
No, this is not a task for imputation algorithms,
since there are no missing values in the time series.
What is described are measurement errors.
So instead of searching for "imputation", the term "measurement error correction" should provide better results.
|
Handling missing data in a time series
|
No, this is not a task for imputation algorithms.
Since there are no missing values in the time series.
What is described are measurement errors.
So instead of searching for "imputation" the term "measurme
|
Handling missing data in a time series
No, this is not a task for imputation algorithms,
since there are no missing values in the time series.
What is described are measurement errors.
So instead of searching for "imputation", the term "measurement error correction" should provide better results.
|
Handling missing data in a time series
No, this is not a task for imputation algorithms.
Since there are no missing values in the time series.
What is described are measurement errors.
So instead of searching for "imputation" the term "measurme
|
43,168
|
How to forecast multivariate time-series 'accurately' with a large number of unknown factors using R?
|
In my experience you only get so far with traditional time series models. Given the complexity you describe I'd try a non-linear machine learning algorithm like random forests. Have a play with the R package 'randomForest'. There's a nice blog post with example code from a Kaggle competition here:
http://blog.kaggle.com/2012/05/01/chucking-everything-into-a-random-forest-ben-hamner-on-winning-the-air-quality-prediction-hackathon/
If machine learning is unfamiliar to you, then this is the reference:
http://statweb.stanford.edu/~tibs/ElemStatLearn/
See also Andrew Ng's Stanford lectures or Coursera course.
|
How to forecast multivariate time-series 'accurately' with a large number of unknown factors using R
|
In my experience you only get so far with traditional time series models. Given the complexity you describe I'd try a non-linear machine learning algorithm like random forests. Have a play with the R
|
How to forecast multivariate time-series 'accurately' with a large number of unknown factors using R?
In my experience you only get so far with traditional time series models. Given the complexity you describe I'd try a non-linear machine learning algorithm like random forests. Have a play with the R package 'randomForest'. There's a nice blog post with example code from a Kaggle competition here:
http://blog.kaggle.com/2012/05/01/chucking-everything-into-a-random-forest-ben-hamner-on-winning-the-air-quality-prediction-hackathon/
If machine learning is unfamiliar to you, then this is the reference:
http://statweb.stanford.edu/~tibs/ElemStatLearn/
See also Andrew Ng's Stanford lectures or Coursera course.
|
How to forecast multivariate time-series 'accurately' with a large number of unknown factors using R
In my experience you only get so far with traditional time series models. Given the complexity you describe I'd try a non-linear machine learning algorithm like random forests. Have a play with the R
|
43,169
|
Weibull distribution with the negative shape parameter
|
There is no good reason not to do such a generalization, which would unite the Weibull and inverse Weibull distributions. So reasons must be historical or accidental.
Also see the comments thread.
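For concreteness (my addition): the expression $\exp(-(x/\lambda)^k)$, which for $k > 0$ is the Weibull survival function, becomes for $k = -a < 0$ an increasing function of $x$, namely
$$F(x) = \exp\left(-\left(\lambda / x\right)^{a}\right), \qquad x > 0,$$
the CDF of the inverse Weibull (Fréchet) distribution. So a single family with shape of either sign would indeed cover both cases.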
|
Weibull distribution with the negative shape parameter
|
There is no good reason not to do such a generalization, which would unite the Weibull and inverse Weibull distributions. So reasons must be historical or accidental.
Also see the comments thread.
|
Weibull distribution with the negative shape parameter
There is no good reason not to do such a generalization, which would unite the Weibull and inverse Weibull distributions. So reasons must be historical or accidental.
Also see the comments thread.
|
Weibull distribution with the negative shape parameter
There is no good reason not to do such a generalization, which would unite the Weibull and inverse Weibull distributions. So reasons must be historical or accidental.
Also see the comments thread.
|
43,170
|
How should I interpret these strange density and mixing plots when fitting a generalised pareto distribution using MCMC with JAGS?
|
The generalised Pareto distribution has the limitation that $\mu < x$. Thus, the posterior density is capped at $x$, as COOLSerdash noted. This is intended behaviour, not a bug.
The lesser, left-most peak in the data is due to the prior distribution being incorrectly specified with a floor of zero.
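To spell out the support constraint (my addition): for shape $\xi \ge 0$ the generalised Pareto CDF is
$$F(x) = 1 - \left(1 + \frac{\xi (x - \mu)}{\sigma}\right)^{-1/\xi}, \qquad x \ge \mu,$$
so the likelihood is zero whenever $\mu > \min_i x_i$, which is why posterior draws of $\mu$ pile up just below the sample minimum.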
|
How should I interpret these strange density and mixing plots when fitting a generalised pareto dist
|
The generalised Pareto distribution has the limitation that $\mu < x$. Thus, the posterior density is capped at $x$, as COOLSerdash noted. This is intended behaviour, not a bug.
The lesser, left-most pe
|
How should I interpret these strange density and mixing plots when fitting a generalised pareto distribution using MCMC with JAGS?
The generalised Pareto distribution has the limitation that $\mu < x$. Thus, the posterior density is capped at $x$, as COOLSerdash noted. This is intended behaviour, not a bug.
The lesser, left-most peak in the data is due to the prior distribution being incorrectly specified with a floor of zero.
|
How should I interpret these strange density and mixing plots when fitting a generalised pareto dist
The generalised Pareto distribution has the limitation that $\mu < x$. Thus, the posterior density is capped at $x$, as COOLSerdash noted. This is intended behaviour, not a bug.
The lesser, left-most pe
|
43,171
|
Repeated measures with single measurements
|
It's tough without truly understanding what your outcome y is, but I'll give it my best shot.
(1) This sounds like it might be approximated by a λ=1 Poisson distribution. Unless you're seeing it messing up the variance components of the model, I wouldn't worry about it. If it is, I would try transforming it by sqrt(cons_x) link and seeing if that works better. Don't forget to check your model for sensibility by plugging in some examples in your final model, varying the # of consultations, and seeing if it makes sense.
(2) Sounds like interaction variables need to be added to see the actual effect of consultations on the sex and age.
y ~ gp_sex + pat_sex + pat_age + cons_x + cons_x*pat_sex + cons_x*pat_age + cons_x*pat_age*pat_sex + ....
|
Repeated measures with single measurements
|
It's tough without truly understanding what your outcome y is, but I'll give it my best shot.
(1) This sounds like it might be approximated by a λ=1 Poisson distribution. Unless you're seeing it messi
|
Repeated measures with single measurements
It's tough without truly understanding what your outcome y is, but I'll give it my best shot.
(1) This sounds like it might be approximated by a λ=1 Poisson distribution. Unless you're seeing it messing up the variance components of the model, I wouldn't worry about it. If it is, I would try transforming it by sqrt(cons_x) link and seeing if that works better. Don't forget to check your model for sensibility by plugging in some examples in your final model, varying the # of consultations, and seeing if it makes sense.
(2) Sounds like interaction variables need to be added to see the actual effect of consultations on the sex and age.
y ~ gp_sex + pat_sex + pat_age + cons_x + cons_x*pat_sex + cons_x*pat_age + cons_x*pat_age*pat_sex + ....
|
Repeated measures with single measurements
It's tough without truly understanding what your outcome y is, but I'll give it my best shot.
(1) This sounds like it might be approximated by a λ=1 Poisson distribution. Unless you're seeing it messi
|
43,172
|
Repeated measures with single measurements
|
I can deliver an answer to my question only empirically, using a simulation. Using these two Cross Validated contributions, mixed and logistic, I could create some fake datasets and use mixed logistic regression. With R and glmer from library(lme4) I used this formula:
fit1 <- glmer(y ~ x1 + (1|j), data = d, family=binomial)
y is a dichotomous variable, x1 is continuous and j is random. First I built a balanced dataset d with the grouping variable j with 20 groups. Then I built two datasets from d with m groups which have only one observation.
The simulation shows that the variance of j and the fixed coefficient b1 of x are very similar to the "true" values for all kinds of datasets. This is due to "perfect" randomization. In reality there will be some bias.
The limits of this answer: lack of theoretical foundation (but intuitively it makes sense). The simulation could be improved by packing it in a loop to get many estimates and comparing the averages with the true values var(uj) and coefficient b1.
Simulation:
# -- Model
# yj[i] = b0 + b1*xj[i]
# b0 = g00 + u0j, u0j ~ N(0,1)
# b1 = const
# => zj[i] = g00 + u0j[i] + b1*xj[i]
# -- Libraries
library(lme4)
library(sqldf)
# -- Create balanced dataset d
# Number of clusters (level 2)
N <- 20
# Number of observations (level 1) for cluster j
nj <- 200
# intercept
g00 <- 1
# slope
b1 <- 3
# Vector of clusters indices 1,1...n1,2,2,2,....n2,...N,N,....nN
j <- c(sapply(1:N, function(x) rep(x, nj)))
# Vector of random variable
uj <- c(sapply(1:N, function(x)rep(rnorm(1), nj)))
# Vector of fixed variable
x1 <- rep(rnorm(nj),N)
# linear combination
z <- g00 + uj + b1*x1
# pass through an inv-logit function
pr <- 1/(1 + exp(-z))
# Bernoulli response variable
y <- rbinom(N*nj,1,pr)
d <- data.frame(j, y=y, z=z,x1=x1, uj=uj)
# -- Create unbalanced datasets d2 and d3
# sort j
d <- sqldf("SELECT * FROM d ORDER BY j")
# count each observation within j
d$ord <- NA
d$ord[1] <- 1
k <- 2
for (i in 2:nrow(d) ) {
if ( d$j[i] == d$j[i-1] ) {
d$ord[i] <- k
k = k+1
} else {
d$ord[i] <- 1
k = 2
}
}
# result
d[c(190:210),]
# Define a sample with m groups which have only one observation
m <- 5
d2 <- subset(d, j %in% c(1:m)
| ( j %in% c((m+1):20) & ord == 1)
)
# Another sample
d3 <- subset(d, j %in% c(1:m)
| ( j %in% c((m+1):20) & ord == 10)
)
# Fit with balanced dataset d
fit1 <- glmer(y ~ x1 + (1|j), data = d, family=binomial)
summary(fit1)
# fit with unbalanced data d2
fit2 <- glmer(y ~ x1 + (1|j), data = d2, family=binomial)
summary(fit2)
# fit with unbalanced data d3
fit3 <- glmer(y ~ x1 + (1|j), data = d3, family=binomial)
summary(fit3)
|
Repeated measures with single measurements
|
I can deliver an answer to my question only empirically using a simulation. Using this two cross validation contributions mixed and logistic I could create some fake datasets and using the mixed logis
|
Repeated measures with single measurements
I can deliver an answer to my question only empirically, using a simulation. Using these two Cross Validated contributions, mixed and logistic, I could create some fake datasets and use mixed logistic regression. With R and glmer from library(lme4) I used this formula:
fit1 <- glmer(y ~ x1 + (1|j), data = d, family=binomial)
y is a dichotomous variable, x1 is continuous and j is random. First I built a balanced dataset d with the grouping variable j with 20 groups. Then I built two datasets from d with m groups which have only one observation.
The simulation shows that the variance of j and the fixed coefficient b1 of x are very similar to the "true" values for all kinds of datasets. This is due to "perfect" randomization. In reality there will be some bias.
The limits of this answer: lack of theoretical foundation (but intuitively it makes sense). The simulation could be improved by packing it in a loop to get many estimates and comparing the averages with the true values var(uj) and coefficient b1.
Simulation:
# -- Model
# yj[i] = b0 + b1*xj[i]
# b0 = g00 + u0j, u0j ~ N(0,1)
# b1 = const
# => zj[i] = g00 + u0j[i] + b1*xj[i]
# -- Libraries
library(lme4)
library(sqldf)
# -- Create balanced dataset d
# Number of clusters (level 2)
N <- 20
# Number of observations (level 1) for cluster j
nj <- 200
# intercept
g00 <- 1
# slope
b1 <- 3
# Vector of clusters indices 1,1...n1,2,2,2,....n2,...N,N,....nN
j <- c(sapply(1:N, function(x) rep(x, nj)))
# Vector of random variable
uj <- c(sapply(1:N, function(x)rep(rnorm(1), nj)))
# Vector of fixed variable
x1 <- rep(rnorm(nj),N)
# linear combination
z <- g00 + uj + b1*x1
# pass through an inv-logit function
pr <- 1/(1 + exp(-z))
# Bernoulli response variable
y <- rbinom(N*nj,1,pr)
d <- data.frame(j, y=y, z=z,x1=x1, uj=uj)
# -- Create unbalanced datasets d2 and d3
# sort j
d <- sqldf("SELECT * FROM d ORDER BY j")
# count each observation within j
d$ord <- NA
d$ord[1] <- 1
k <- 2
for (i in 2:nrow(d) ) {
if ( d$j[i] == d$j[i-1] ) {
d$ord[i] <- k
k = k+1
} else {
d$ord[i] <- 1
k = 2
}
}
# result
d[c(190:210),]
# Define a sample with m groups which have only one observation
m <- 5
d2 <- subset(d, j %in% c(1:m)
| ( j %in% c((m+1):20) & ord == 1)
)
# Another sample
d3 <- subset(d, j %in% c(1:m)
| ( j %in% c((m+1):20) & ord == 10)
)
# Fit with balanced dataset d
fit1 <- glmer(y ~ x1 + (1|j), data = d, family=binomial)
summary(fit1)
# fit with unbalanced data d2
fit2 <- glmer(y ~ x1 + (1|j), data = d2, family=binomial)
summary(fit2)
# fit with unbalanced data d3
fit3 <- glmer(y ~ x1 + (1|j), data = d3, family=binomial)
summary(fit3)
|
Repeated measures with single measurements
I can deliver an answer to my question only empirically using a simulation. Using this two cross validation contributions mixed and logistic I could create some fake datasets and using the mixed logis
|
43,173
|
Repeated measures with single measurements
|
I'm certainly no expert and would love others to comment on this, but:
I'm not sure what your outcome is, but you said it was measured 1/0. For the sake of the example I'll pretend that your outcome is "happy with consultation", yes=1 and no=0.
I think that for the people with multiple consultations you have an easier task. You could work out the proportion of happy responses for each person based on their multiple consultations (e.g. person 1 had 6 consultations, was happy 3 times and not happy 3 times = 50% happy). You could then work out an overall average happiness across all people by adding up the % happy for each person and dividing by the number of people. You would then get an 'overall average % happy' measure. You'd also get a standard error around this overall %. You basically now have data that could be analysed with a t test or other measures based on a normal distribution.
Where you don't have a repeat for the person, you cannot do this. The best you can do is work out a proportion of happy responses across all participants. Let's say you had 4 participants who only had 1 consultation (p1 was happy=1, p2 was happy=1, p3=1 and p4=0), so the proportion happy is 75%. You would now have to use binomial tests, e.g. a z test - which I believe are not as powerful (I have just posted a question on this).
You may have to analyse your data for the no repeats and the repeats separately. Unfortunately you can't just pretend that your repeated data is from different people because this would violate the assumption of independence, so I sense that a different approach is needed for the repeated measures vs no repeated measures data.
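The arithmetic above can be sketched in a few lines of Python (my sketch; the numbers are the hypothetical examples from the text):

```python
import math

# Hypothetical numbers from the text: per-person happiness proportions
# for people with repeated consultations (person 1: 3/6 happy, etc.)
props = [0.5, 1.0, 0.75, 0.25]
mean_p = sum(props) / len(props)                      # overall average % happy
sd = math.sqrt(sum((p - mean_p) ** 2 for p in props) / (len(props) - 1))
se = sd / math.sqrt(len(props))                       # standard error of that average

# Single-consultation people: pooled proportion with a binomial standard error
k, n = 3, 4                                           # 3 happy out of 4 (p1..p4)
p_hat = k / n
se_bin = math.sqrt(p_hat * (1 - p_hat) / n)
```

The first block feeds a t-type analysis; the second feeds a binomial/z-type test, which is the split described above.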
|
Repeated measures with single measurements
|
I'm certainly no expert and would love others to comment on this, but:
I'm not sure what your outcome is but you said it was measured 1/0. for the sake of the example i'll pretend that your outcome
|
Repeated measures with single measurements
I'm certainly no expert and would love others to comment on this, but:
I'm not sure what your outcome is, but you said it was measured 1/0. For the sake of the example I'll pretend that your outcome is "happy with consultation", yes=1 and no=0.
I think that for the people with multiple consultations you have an easier task. You could work out the proportion of happy responses for each person based on their multiple consultations (e.g. person 1 had 6 consultations, was happy 3 times and not happy 3 times = 50% happy). You could then work out an overall average happiness across all people by adding up the % happy for each person and dividing by the number of people. You would then get an 'overall average % happy' measure. You'd also get a standard error around this overall %. You basically now have data that could be analysed with a t test or other measures based on a normal distribution.
Where you don't have a repeat for the person, you cannot do this. The best you can do is work out a proportion of happy responses across all participants. Let's say you had 4 participants who only had 1 consultation (p1 was happy=1, p2 was happy=1, p3=1 and p4=0), so the proportion happy is 75%. You would now have to use binomial tests, e.g. a z test - which I believe are not as powerful (I have just posted a question on this).
You may have to analyse your data for the no repeats and the repeats separately. Unfortunately you can't just pretend that your repeated data is from different people because this would violate the assumption of independence, so I sense that a different approach is needed for the repeated measures vs no repeated measures data.
|
Repeated measures with single measurements
I'm certainly no expert and would love others to comment on this, but:
I'm not sure what your outcome is but you said it was measured 1/0. for the sake of the example i'll pretend that your outcome
|
43,174
|
Would there be a model selection problem if we had access to an oracle that gave us the exact generalization error?
|
Basically I feel the question assumes that we have an oracle for determining when generalization is minimum too.
Of course, having this would be superb. Having an oracle that gives us the best model would be even better. However, you seem to misinterpret the function of the oracle.
The task of model selection is to pick the best model from a given set. We do this by choosing the model which we believe to have the best generalization performance. Without an oracle to tell us $\mathcal{E}(h)$ we are forced to estimate the generalization performance instead, let's say $\hat{\mathcal{E}}(h)$.
Because we need to choose a model based on its estimated generalization performance we have no guarantees of choosing the right one. This is what makes model selection tricky (and somewhat arbitrary). If we had access to the true generalization performance, model selection would be trivial.
The reason is, say that the model class $\mathcal{H}$ is infinite (i.e. there is an infinite set of models to choose from).
This is a nice theoretical question but is somewhat tangential to the practical problem since one typically wishes to choose the best model within a finite set of options.
You are correct that a truly infinite set of models would yield an undecidable problem without making further assumptions. In practice, however, some further assumptions are reasonable.
It is common and often reasonable to assume that the functional form of $\mathcal{E}(h)$ behaves in a certain way with regards to hyperparameters of a given model class (for example convex). If such assumptions hold, the globally optimal hyperparameters could be found in polynomial time.
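As an illustration of that last point (my sketch, with a made-up convex `oracle` function standing in for exact generalization error): given exact, unimodal error in one hyperparameter, a simple golden-section search finds the global optimum quickly:

```python
# Hypothetical convex "oracle" for the exact generalization error of a
# model class as a function of a single hyperparameter lmbda.
def oracle(lmbda):
    return (lmbda - 0.3) ** 2 + 0.1   # true optimum at lmbda = 0.3

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    phi = (5 ** 0.5 - 1) / 2
    while b - a > tol:
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if f(c) < f(d):
            b = d        # minimum lies in [a, d]
        else:
            a = c        # minimum lies in [c, b]
    return (a + b) / 2

best = golden_section(oracle, 0.0, 1.0)
```

Each iteration shrinks the bracket by a constant factor, so the optimum is located to tolerance in logarithmically many oracle calls.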
|
Would there be a model selection problem if we had access to an oracle that gave us the exact genera
|
Basically I feel the question assumes that we have an oracle for determining when generalization is minimum too.
Of course, having this would be superb. Having an oracle that gives us the best model
|
Would there be a model selection problem if we had access to an oracle that gave us the exact generalization error?
Basically I feel the question assumes that we have an oracle for determining when generalization is minimum too.
Of course, having this would be superb. Having an oracle that gives us the best model would be even better. However, you seem to misinterpret the function of the oracle.
The task of model selection is to pick the best model from a given set. We do this by choosing the model which we believe to have the best generalization performance. Without an oracle to tell us $\mathcal{E}(h)$ we are forced to estimate the generalization performance instead, let's say $\hat{\mathcal{E}}(h)$.
Because we need to choose a model based on its estimated generalization performance we have no guarantees of choosing the right one. This is what makes model selection tricky (and somewhat arbitrary). If we had access to the true generalization performance, model selection would be trivial.
The reason is, say that the model class $\mathcal{H}$ is infinite (i.e. there is an infinite set of models to choose from).
This is a nice theoretical question but is somewhat tangential to the practical problem since one typically wishes to choose the best model within a finite set of options.
You are correct that a truly infinite set of models would yield an undecidable problem without making further assumptions. In practice, however, some further assumptions are reasonable.
It is common and often reasonable to assume that the functional form of $\mathcal{E}(h)$ behaves in a certain way with regards to hyperparameters of a given model class (for example convex). If such assumptions hold, the globally optimal hyperparameters could be found in polynomial time.
|
Would there be a model selection problem if we had access to an oracle that gave us the exact genera
Basically I feel the question assumes that we have an oracle for determining when generalization is minimum too.
Of course, having this would be superb. Having an oracle that gives us the best model
|
43,175
|
Calculating significance and uplift on Revenue A/B Tests
|
As I understand it, you have a 2x2 experimental design (Factor 1 - website (levels: a, b); Factor 2 - visitor group (levels: a, b)) and the dependent variable 'Revenue'.
I would consider ANOVA, being careful about all the assumptions behind it. The way you measure/code the dependent variable 'Revenue' has different implications.
This movie from Andy Field might be useful.
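For illustration only, here is a pure-Python sketch of the sums of squares for a balanced 2x2 ANOVA on simulated revenue data (the cell means and noise level are invented, and real revenue data would likely violate the normality assumption, which is exactly why the assumptions need checking):

```python
import random
import statistics

random.seed(7)

# Synthetic balanced 2x2 design: factor A = website (a/b), factor B = visitor group (a/b).
r = 50  # replicates (visitors) per cell
cell_mean = {("a", "a"): 10, ("a", "b"): 10, ("b", "a"): 12, ("b", "b"): 12}
data = {k: [random.gauss(m, 2.0) for _ in range(r)] for k, m in cell_mean.items()}

grand = statistics.mean(v for cell in data.values() for v in cell)
m_cell = {k: statistics.mean(v) for k, v in data.items()}
m_A = {a: statistics.mean(data[(a, "a")] + data[(a, "b")]) for a in "ab"}

# Standard balanced two-way ANOVA sums of squares for the website main effect.
ss_A = 2 * r * sum((m_A[a] - grand) ** 2 for a in "ab")
ss_W = sum((v - m_cell[k]) ** 2 for k, cell in data.items() for v in cell)
F_A = (ss_A / 1) / (ss_W / (4 * r - 4))  # degrees of freedom: 1 and 4r-4
print(round(F_A, 1))  # the simulated website effect should give a large F
```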
Regards,
Marius
|
Calculating significance and uplift on Revenue A/B Tests
|
As I understand you have a 2X2 experimental design ( Factor1 - website(levels: a,b), factor2 - visitors group (levels: a,b)) and the dependent variable 'Revenue'.
I would consider ANOVA being careful
|
Calculating significance and uplift on Revenue A/B Tests
As I understand it, you have a 2x2 experimental design (Factor 1 - website (levels: a, b); Factor 2 - visitor group (levels: a, b)) and the dependent variable 'Revenue'.
I would consider ANOVA, being careful about all the assumptions behind it. The way you measure/code the dependent variable 'Revenue' has different implications.
This movie from Andy Field might be useful.
Regards,
Marius
|
Calculating significance and uplift on Revenue A/B Tests
As I understand you have a 2X2 experimental design ( Factor1 - website(levels: a,b), factor2 - visitors group (levels: a,b)) and the dependent variable 'Revenue'.
I would consider ANOVA being careful
|
43,176
|
Calculating significance and uplift on Revenue A/B Tests
|
Probably the mean revenue per cookie has a normal distribution (you can check this with bootstrapping). The individual revenues per se do not have a normal distribution, just the mean revenue per cookie. That said, you can apply a hypothesis test as usual and check whether the difference in revenue per user between the two groups is significant.
Another approach is to consider the revenue distribution as a combination of two other distributions. The first one is the distribution for conversion (buy or not) and the other distribution is for the average revenue per order.
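If you would rather not rely on normality at all, a permutation test on the difference in mean revenue per cookie is a simple alternative. A sketch with simulated zero-inflated revenues (all numbers invented):

```python
import random
import statistics

random.seed(1)

def simulate(n, conv, avg_order):
    """Revenue per cookie: 0 with prob 1-conv, else an exponential order value."""
    return [random.expovariate(1 / avg_order) if random.random() < conv else 0.0
            for _ in range(n)]

a = simulate(2000, 0.05, 40.0)   # control
b = simulate(2000, 0.06, 40.0)   # variant
obs = statistics.fmean(b) - statistics.fmean(a)

# Permutation test on the difference in mean revenue per cookie:
# no normality assumption on the individual revenues is needed.
pool = a + b
count = 0
B = 500
for _ in range(B):
    random.shuffle(pool)
    diff = statistics.fmean(pool[:2000]) - statistics.fmean(pool[2000:])
    if abs(diff) >= abs(obs):
        count += 1
p_value = (count + 1) / (B + 1)
print(round(p_value, 3))
```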
|
Calculating significance and uplift on Revenue A/B Tests
|
Probably the revenue per cookie has a normal distribution (you can check this with bootstrapping). The revenue per si do not have a normal distribution, just the revenue per cookie. That said, you can
|
Calculating significance and uplift on Revenue A/B Tests
Probably the mean revenue per cookie has a normal distribution (you can check this with bootstrapping). The individual revenues per se do not have a normal distribution, just the mean revenue per cookie. That said, you can apply a hypothesis test as usual and check whether the difference in revenue per user between the two groups is significant.
Another approach is to consider the revenue distribution as a combination of two other distributions. The first one is the distribution for conversion (buy or not) and the other distribution is for the average revenue per order.
|
Calculating significance and uplift on Revenue A/B Tests
Probably the revenue per cookie has a normal distribution (you can check this with bootstrapping). The revenue per si do not have a normal distribution, just the revenue per cookie. That said, you can
|
43,177
|
Variance of EM mean estimates in a simple mixture of two normals
|
I cannot answer with a simple formula to calculate the variances of $\mu_1$ and $\mu_2$ or their covariance. I can only depict the mathematical steps needed to obtain them which leads to the conclusion that there is not an exact analytical solution.
Let's recapitulate for a moment how the variance of an ML estimate $\mu$ for a simple normal distribution with known variance $S$ can be calculated. ML estimation means to select a $\mu$ that maximizes the probability for the data points $\{x_i\}$. Since data points are assumed to be drawn independently, the probability to obtain a sequence of $N$ data points is given by:
$$P(x_1,x_2,\dots)=\prod_{i=1}^{N}\frac{1}{\sqrt{2\pi S}}\exp\left(-\frac{1}{2S}(x_i-\mu)^2\right)=\\
= \left(\frac{1}{\sqrt{2\pi S}}\right)^N \exp\left(-\frac{1}{2S}\sum_{i=1}^N (x_i-\mu)^2\right)$$
In order to select the $\mu$ which maximizes the $P$ one has to find a $\mu$ for which the derivative $\frac{dP}{d\mu}$ vanishes. This can be done analytically and leads to the well known expression
$$\mu=\frac{1}{N}\sum_{i=1}^{N}x_i$$
Since the variance $S$ is known, the variance for the sample mean is
$$\mbox{Var}(\mu)=\mbox{Var}\left(\frac{1}{N}\sum_{i=1}^{N}x_i\right)=\frac{1}{N^2}\sum_{i=1}^{N}\mbox{Var}(x_i)=\frac{S}{N}$$
Okay, now for the real objective. The same steps as for the simple normal distribution have to be applied. In the following I abbreviate the density function of the normal distribution by $\mathcal{N}$ (not to be confused with the sample size $N$). So, first the probability to obtain a sequence of independently drawn $x_i$, given that they follow a normal mixture with two components:
$$P(x_1,x_2,\dots)=\prod_{i=1}^{N}
\left[p\,\mathcal{N}\left(x_i|\mu_1,S_1\right)+(1-p)\,\mathcal{N}\left(x_i|\mu_2,S_2\right)\right]$$
The derivatives of that with respect to $\mu_1$ and $\mu_2$ can be found by applying the chain and the product rule. However, since it is ugly work to do and the result additionally varies depending on the number of data points in a non-trivial way, I cannot state a general analytic expression. Nevertheless, let's assume one calculated the derivatives and obtained two (non-linear) equations to determine $\mu_1$ and $\mu_2$:
$$\frac{dP(x_1,x_2,\dots|\mu_1,\mu_2)}{d\mu_1}=0 \hspace{1em}\mbox{and}\hspace{1em}
\frac{dP(x_1,x_2,\dots|\mu_1,\mu_2)}{d\mu_2}=0$$
In these equations the unknown variables $\mu_1,\mu_2$ appear both as factors multiplying the exponential functions and as arguments of the exponential functions. Additionally, the exponential functions that appear have different arguments, so one cannot get rid of them by dividing, as is possible for the simple normal distribution. Hence, these equations cannot be solved analytically. However, if $\mu_1$ and $\mu_2$ are far apart from each other in terms of $S_1$ and $S_2$, the two stated equations approximately decouple, so that the first equation only contains significant terms with $\mu_1$ and the $x_i$'s in the vicinity of $\mu_1$, and vice versa for the second equation. Then the variances of $\mu_1$ and $\mu_2$ will be, to a good approximation, what you already guessed. Furthermore, the correlation between the two variables should be close to zero. However, if there is significant overlap between the two normal distributions, the equations do not decouple.
To conclude, the possibility of stating the variance of a quantity explicitly is tied to the possibility of stating the quantity itself explicitly. The fact that one has to use an iterative solving scheme such as the EM algorithm to get $\mu_1,\mu_2$ indicates that one also has to rely on numerical methods for the estimation of their variances.
So, given that $S_1,S_2,p$ are known, how could one estimate the variance of $\mu_1$ and $\mu_2$? One way is to calculate the two-dimensional integral numerically:
$$ \mbox{Var}(\mu_1) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty}
(\tilde{\mu}_1 - \mu_1)^2 P(x_1,x_2,\dots|\tilde{\mu}_1,\tilde{\mu}_2) \,d\tilde{\mu}_1\,d\tilde{\mu}_2$$
where $P(x_1,x_2,\dots|\mu_1,\mu_2)$ is the normalized likelihood of the normal mixture:
$$\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty}
P(x_1,x_2,\dots|\tilde{\mu}_1,\tilde{\mu}_2) \,d\tilde{\mu}_1\,d\tilde{\mu}_2 = 1$$
The formula for $\mbox{Var}(\mu_1)$ can be justified within the framework of Bayesian statistics utilizing an improper uniform prior:
$$P(\mu_1,\mu_2|x_1,x_2,\dots) = \frac{P(x_1,x_2,\dots|\mu_1,\mu_2)P(\mu_1,\mu_2)}{P(x_1,x_2,\dots)}$$
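As a sanity check of this numerical route, here is a rough Python sketch that evaluates the posterior of $(\mu_1,\mu_2)$ on a grid under a flat prior and computes $\mbox{Var}(\mu_1)$ for well-separated simulated components. The parameter values and grid ranges are arbitrary choices, and the grid is restricted to one labeling of the components to avoid the label-switching mode:

```python
import math
import random

random.seed(2)
p, S1, S2 = 0.5, 1.0, 1.0
xs = [random.gauss(-3.0 if random.random() < p else 3.0, 1.0) for _ in range(100)]

def loglik(m1, m2):
    """Log-likelihood of the two-component normal mixture at (m1, m2)."""
    total = 0.0
    for x in xs:
        a = p * math.exp(-(x - m1) ** 2 / (2 * S1)) / math.sqrt(2 * math.pi * S1)
        b = (1 - p) * math.exp(-(x - m2) ** 2 / (2 * S2)) / math.sqrt(2 * math.pi * S2)
        total += math.log(a + b)
    return total

# Grid over (mu1, mu2) near one labeling of the components; exponentiating the
# shifted log-likelihood and normalizing gives the posterior under a flat prior.
g1 = [-3.5 + 0.02 * i for i in range(51)]
g2 = [2.5 + 0.02 * i for i in range(51)]
L = [[loglik(m1, m2) for m2 in g2] for m1 in g1]
mx = max(max(row) for row in L)
W = [[math.exp(v - mx) for v in row] for row in L]
Z = sum(map(sum, W))
mean1 = sum(g1[i] * sum(W[i]) for i in range(51)) / Z
var1 = sum((g1[i] - mean1) ** 2 * sum(W[i]) for i in range(51)) / Z
print(round(var1, 4))  # roughly S1/(p*N) = 0.02 for well-separated components
```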
|
Variance of EM mean estimates in a simple mixture of two normals
|
I cannot answer with a simple formula to calculate the variances of $\mu_1$ and $\mu_2$ or their covariance. I can only depict the mathematical steps needed to obtain them which leads to the conclusio
|
Variance of EM mean estimates in a simple mixture of two normals
I cannot answer with a simple formula to calculate the variances of $\mu_1$ and $\mu_2$ or their covariance. I can only depict the mathematical steps needed to obtain them which leads to the conclusion that there is not an exact analytical solution.
Let's recapitulate for a moment how the variance of an ML estimate $\mu$ for a simple normal distribution with known variance $S$ can be calculated. ML estimation means to select a $\mu$ that maximizes the probability for the data points $\{x_i\}$. Since data points are assumed to be drawn independently, the probability to obtain a sequence of $N$ data points is given by:
$$P(x_1,x_2,\dots)=\prod_{i=1}^{N}\frac{1}{\sqrt{2\pi S}}\exp\left(-\frac{1}{2S}(x_i-\mu)^2\right)=\\
= \left(\frac{1}{\sqrt{2\pi S}}\right)^N \exp\left(-\frac{1}{2S}\sum_{i=1}^N (x_i-\mu)^2\right)$$
In order to select the $\mu$ which maximizes the $P$ one has to find a $\mu$ for which the derivative $\frac{dP}{d\mu}$ vanishes. This can be done analytically and leads to the well known expression
$$\mu=\frac{1}{N}\sum_{i=1}^{N}x_i$$
Since the variance $S$ is known, the variance for the sample mean is
$$\mbox{Var}(\mu)=\mbox{Var}\left(\frac{1}{N}\sum_{i=1}^{N}x_i\right)=\frac{1}{N^2}\sum_{i=1}^{N}\mbox{Var}(x_i)=\frac{S}{N}$$
Okay, now for the real objective. The same steps as for the simple normal distribution have to be applied. In the following I abbreviate the density function of the normal distribution by $\mathcal{N}$ (not to be confused with the sample size $N$). So, first the probability to obtain a sequence of independently drawn $x_i$, given that they follow a normal mixture with two components:
$$P(x_1,x_2,\dots)=\prod_{i=1}^{N}
\left[p\,\mathcal{N}\left(x_i|\mu_1,S_1\right)+(1-p)\,\mathcal{N}\left(x_i|\mu_2,S_2\right)\right]$$
The derivatives of that with respect to $\mu_1$ and $\mu_2$ can be found by applying the chain and the product rule. However, since it is ugly work to do and the result additionally varies depending on the number of data points in a non-trivial way, I cannot state a general analytic expression. Nevertheless, let's assume one calculated the derivatives and obtained two (non-linear) equations to determine $\mu_1$ and $\mu_2$:
$$\frac{dP(x_1,x_2,\dots|\mu_1,\mu_2)}{d\mu_1}=0 \hspace{1em}\mbox{and}\hspace{1em}
\frac{dP(x_1,x_2,\dots|\mu_1,\mu_2)}{d\mu_2}=0$$
In these equations the unknown variables $\mu_1,\mu_2$ appear both as factors multiplying the exponential functions and as arguments of the exponential functions. Additionally, the exponential functions that appear have different arguments, so one cannot get rid of them by dividing, as is possible for the simple normal distribution. Hence, these equations cannot be solved analytically. However, if $\mu_1$ and $\mu_2$ are far apart from each other in terms of $S_1$ and $S_2$, the two stated equations approximately decouple, so that the first equation only contains significant terms with $\mu_1$ and the $x_i$'s in the vicinity of $\mu_1$, and vice versa for the second equation. Then the variances of $\mu_1$ and $\mu_2$ will be, to a good approximation, what you already guessed. Furthermore, the correlation between the two variables should be close to zero. However, if there is significant overlap between the two normal distributions, the equations do not decouple.
To conclude, the possibility of stating the variance of a quantity explicitly is tied to the possibility of stating the quantity itself explicitly. The fact that one has to use an iterative solving scheme such as the EM algorithm to get $\mu_1,\mu_2$ indicates that one also has to rely on numerical methods for the estimation of their variances.
So, given that $S_1,S_2,p$ are known, how could one estimate the variance of $\mu_1$ and $\mu_2$? One way is to calculate the two-dimensional integral numerically:
$$ \mbox{Var}(\mu_1) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty}
(\tilde{\mu}_1 - \mu_1)^2 P(x_1,x_2,\dots|\tilde{\mu}_1,\tilde{\mu}_2) \,d\tilde{\mu}_1\,d\tilde{\mu}_2$$
where $P(x_1,x_2,\dots|\mu_1,\mu_2)$ is the normalized likelihood of the normal mixture:
$$\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty}
P(x_1,x_2,\dots|\tilde{\mu}_1,\tilde{\mu}_2) \,d\tilde{\mu}_1\,d\tilde{\mu}_2 = 1$$
The formula for $\mbox{Var}(\mu_1)$ can be justified within the framework of Bayesian statistics utilizing an improper uniform prior:
$$P(\mu_1,\mu_2|x_1,x_2,\dots) = \frac{P(x_1,x_2,\dots|\mu_1,\mu_2)P(\mu_1,\mu_2)}{P(x_1,x_2,\dots)}$$
|
Variance of EM mean estimates in a simple mixture of two normals
I cannot answer with a simple formula to calculate the variances of $\mu_1$ and $\mu_2$ or their covariance. I can only depict the mathematical steps needed to obtain them which leads to the conclusio
|
43,178
|
Variance of EM mean estimates in a simple mixture of two normals
|
The trick is to write down your likelihood exactly. There are many approaches to EM normal mixture estimation. Is the number in each group known? (hypergeometric likelihood) or will you potentially have everyone in one group (binomial likelihood)? Do the 2 normal mixture variates have common variance $\sigma^2$ or are there two separate variances?
Using the law of total variance:
$$\mbox{var}(\bar{x}_1) = \mbox{var}(E(\bar{x}_1|p)) + E(\mbox{var}(\bar{x}_1|p))$$
Where the $\bar{x}_1$ denotes the mean of the first normal mixture variate.
Expressions for the two terms on the RHS depend on the likelihood!
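As a generic numeric illustration of the identity (applied here to the mixture variate itself rather than to $\bar{x}_1$, with invented parameters), a quick Monte Carlo check:

```python
import random
import statistics

random.seed(3)
# Monte Carlo check of var(Y) = var(E[Y|G]) + E[var(Y|G)]
# for a two-component normal mixture with group indicator G.
p, m1, m2, s = 0.3, 0.0, 5.0, 1.0
ys = [random.gauss(m1 if random.random() < p else m2, s) for _ in range(100_000)]

m = statistics.fmean(ys)
lhs = statistics.fmean((v - m) ** 2 for v in ys)
# E[Y|G] takes values m1 or m2, so var(E[Y|G]) = p(1-p)(m1-m2)^2;
# var(Y|G) = s^2 in both groups, so E[var(Y|G)] = s^2.
rhs = p * (1 - p) * (m2 - m1) ** 2 + s ** 2
print(abs(lhs - rhs) < 0.2)
```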
|
Variance of EM mean estimates in a simple mixture of two normals
|
The trick is to write down your likelihood exactly. There are many approaches to EM normal mixture estimation. Is the number in each group known? (hypergeometric likelihood) or will you potentially ha
|
Variance of EM mean estimates in a simple mixture of two normals
The trick is to write down your likelihood exactly. There are many approaches to EM normal mixture estimation. Is the number in each group known? (hypergeometric likelihood) or will you potentially have everyone in one group (binomial likelihood)? Do the 2 normal mixture variates have common variance $\sigma^2$ or are there two separate variances?
Using the law of total variance:
$$\mbox{var}(\bar{x}_1) = \mbox{var}(E(\bar{x}_1|p)) + E(\mbox{var}(\bar{x}_1|p))$$
Where the $\bar{x}_1$ denotes the mean of the first normal mixture variate.
Expressions for the two terms on the RHS depend on the likelihood!
|
Variance of EM mean estimates in a simple mixture of two normals
The trick is to write down your likelihood exactly. There are many approaches to EM normal mixture estimation. Is the number in each group known? (hypergeometric likelihood) or will you potentially ha
|
43,179
|
Variance of EM mean estimates in a simple mixture of two normals
|
First note that EM is an optimization algorithm.
The statistical properties of your estimator are driven by the fact that it is an MLE, and not by the optimization algorithm used to find it.
In this particular case, the MLE has no closed form solution so that you cannot really analyze its properties.
Asymptotic theory of MLE does, however, hold for this problem, so that you can use the fact that your estimators are consistent and asymptotically normal (CAN) to approximate their properties.
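For instance, one standard route is to approximate the variance of the MLE by the inverse of the observed information, the second derivative of the negative log-likelihood at the MLE, computed numerically. A sketch on the simple known-variance normal case, where the exact answer $S/N$ is available for comparison (sample size and parameters are invented):

```python
import math
import random

random.seed(4)
S = 2.0
xs = [random.gauss(1.0, math.sqrt(S)) for _ in range(500)]
n = len(xs)

def negll(mu):
    """Negative log-likelihood of N(mu, S) with S known."""
    return sum((x - mu) ** 2 for x in xs) / (2 * S) + 0.5 * n * math.log(2 * math.pi * S)

mu_hat = sum(xs) / n  # the MLE
h = 1e-3
# Observed information: numerical second derivative of negll at the MLE;
# the asymptotic variance of the MLE is its inverse.
info = (negll(mu_hat + h) - 2 * negll(mu_hat) + negll(mu_hat - h)) / h ** 2
print(round(1 / info, 4), S / n)  # both should be S/N = 0.004
```

For the mixture problem the same finite-difference recipe applies to the mixture log-likelihood at the EM solution, only without a closed-form answer to compare against.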
|
Variance of EM mean estimates in a simple mixture of two normals
|
First note that EM is an optimization algorithm.
The statistical properties of your estimator are driven by the fact that it is an MLE, and not by the optimization algorithm used to find it.
In this
|
Variance of EM mean estimates in a simple mixture of two normals
First note that EM is an optimization algorithm.
The statistical properties of your estimator are driven by the fact that it is an MLE, and not by the optimization algorithm used to find it.
In this particular case, the MLE has no closed form solution so that you cannot really analyze its properties.
Asymptotic theory of MLE does, however, hold for this problem, so that you can use the fact that your estimators are consistent and asymptotically normal (CAN) to approximate their properties.
|
Variance of EM mean estimates in a simple mixture of two normals
First note that EM is an optimization algorithm.
The statistical properties of your estimator are driven by the fact that it is an MLE, and not by the optimization algorithm used to find it.
In this
|
43,180
|
R: power calculation for beta coefficient
|
The answer is "no" but not for the reason in vafisher's answer.
The correct formula for the power of a two-sided hypothesis test for a single regression coefficient is
$$\begin{align}
\mathrm{power}=&\operatorname{Pr}\left(t_{\mathrm{df}} \le -\frac{D}{\operatorname{se}\left[D\right]} - {t}_{\mathrm{df},\frac{\alpha}{2}} \right)+\\&\operatorname{Pr}\left(t_{\mathrm{df}} > -\frac{D}{\operatorname{se}\left[D\right]} + {t}_{\mathrm{df},\frac{\alpha}{2}} \right)
\end{align}$$
where $D$ is the effect size, in this case $D=\hat{\beta}-\beta_0=\hat{\beta}$. This is given in the appendix of Dupont and Plummer (1998) (pdf), equation (A1) on p. 599, and in these notes.
The "Z" version of this test is just the normal approximation to the t-test. Note that the t-squared test (i.e. the $\chi^2$ test) is equivalent to the nested F test of regression models that differ by a single coefficient.
|
R: power calculation for beta coefficient
|
The answer is "no" but not for the reason in vafisher's answer.
The correct formula for the power of a two-sided hypothesis test for a single regression coefficient is
$$\begin{align}
\mathrm{power}=
|
R: power calculation for beta coefficient
The answer is "no" but not for the reason in vafisher's answer.
The correct formula for the power of a two-sided hypothesis test for a single regression coefficient is
$$\begin{align}
\mathrm{power}=&\operatorname{Pr}\left(t_{\mathrm{df}} \le -\frac{D}{\operatorname{se}\left[D\right]} - {t}_{\mathrm{df},\frac{\alpha}{2}} \right)+\\&\operatorname{Pr}\left(t_{\mathrm{df}} > -\frac{D}{\operatorname{se}\left[D\right]} + {t}_{\mathrm{df},\frac{\alpha}{2}} \right)
\end{align}$$
where $D$ is the effect size, in this case $D=\hat{\beta}-\beta_0=\hat{\beta}$. This is given in the appendix of Dupont and Plummer (1998) (pdf), equation (A1) on p. 599, and in these notes.
The "Z" version of this test is just the normal approximation to the t-test. Note that the t-squared test (i.e. the $\chi^2$ test) is equivalent to the nested F test of regression models that differ by a single coefficient.
|
R: power calculation for beta coefficient
The answer is "no" but not for the reason in vafisher's answer.
The correct formula for the power of a two-sided hypothesis test for a single regression coefficient is
$$\begin{align}
\mathrm{power}=
|
43,181
|
R: power calculation for beta coefficient
|
Short answer, no. The t-statistic you're looking at is the square root of the F discussed in this post: What is the power of the regression F test? for the removal of a single independent variable.
|
R: power calculation for beta coefficient
|
Short answer, no. The t-statistic you're looking at is the square root of the F discussed in this post: What is the power of the regression F test? for the removal of a single independent variable.
|
R: power calculation for beta coefficient
Short answer, no. The t-statistic you're looking at is the square root of the F discussed in this post: What is the power of the regression F test? for the removal of a single independent variable.
|
R: power calculation for beta coefficient
Short answer, no. The t-statistic you're looking at is the square root of the F discussed in this post: What is the power of the regression F test? for the removal of a single independent variable.
|
43,182
|
Ranking undergrad students by their future income - Mixture distribution
|
Mixture models of only two distributions, $D_1$ and $D_2$, can be considered to have the density $p D_1+(1-p) D_2$, where $0<p<1$. Note $p\neq0,1$, because then it would no longer be a mixture. In general, for any mixture distribution, software is available to find the best mixture model, e.g., FindDistribution in Mathematica, or mixtools in R as suggested by @omidi. However, in the case given by the OP, there is no overlap between the distributions, as a salary of zero is not a salary. There is no particular need to add one to the income to take logarithms, as there is no need to take logarithms at all. Instead, all that need be done is to assign a Dirac $\delta$ for $D_1$, the zero salaries, i.e., $p \delta(x=0) +(1-p)D_2$, and then find $p$ and $D_2$. Finding $p$ is trivial, as $p=\frac{N_{no}}{N_{no}+N_{yes}}$, where $N_{yes}$ and $N_{no}$ are the numbers of subjects with and without income, respectively. Finding the best distribution for those with incomes can be done with software, or by searching the prior literature for models. However, even then, the distribution of those with income, $D_2$ here, can itself be a mixture distribution. Moreover, use of an empirical distribution may be enough to answer some of the questions the OP needs to answer, and although it is nice to have a theoretical distribution, it is not absolutely required.
That is, the final distribution could be $p \delta(0) +(1-p)D_{Emp}(x)$, where $p=\frac{N_{no}}{N_{no}+N_{yes}}$. To be clear, that formulation is identical to what the data is, as the empirical distribution is $\frac{1}{N_{yes}}\sum_{i}\delta(x-x_i)$.
Other questions: "...set the mean for one of them [Sic, Gaussian distributions] to be known as equal to 0." That is what the Dirac $\delta(0)$ is; although the first distribution used to create it was historically Cauchy, it makes little difference which limit one uses to create the $\delta$, because its standard deviation is zero. Regarding using a Gaussian for the second distribution, not a good idea as the right tail of income is notoriously heavy, e.g., see Pareto distribution. If a theoretical distribution is desired in R for the OP's data for those having an income, see this.
Finally, I do not fully understand "Another problem might be that I am always predicting the income using a regression and building a ranking from that, rather than running an ordinal regression. What is the best way handle this situation - if the target variable (income) that the ranking is based on is itself available for training data?"
If one desires to predict income using regression, transforming variables may be desirable, but no transformation of $p \delta(0) +(1-p)D_{Emp}(x)$ will have "nice" residuals. That is, the residuals will not be Gaussian or homoscedastic, so the usual default regression techniques will not yield accurate answers. Maybe the way to treat this is to use classifiers to find the probability of finding a job with a yes/no Y-axis variable, and if yes use glm to find what that salary is for a best theoretical distribution of non-zero salaries. That is, one can then use classifiers to determine predictors of non-zero salary, and when finished, one then has two cases, predictors of getting a job, and predictors of salary if one secures a position.
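A tiny numeric sketch of the two-part view (invented data: a point mass at zero plus a Pareto tail standing in for the heavy right tail of salaries), checking that $p\,\delta(0)+(1-p)D_{Emp}$ reproduces the plain sample mean:

```python
import random
import statistics

random.seed(5)
# Hypothetical incomes: a point mass at zero (no job) plus a heavy-tailed
# positive part (Pareto, mimicking the notorious right tail of salaries).
incomes = [0.0 if random.random() < 0.3 else random.paretovariate(3.0) * 30000
           for _ in range(5000)]

n_yes = sum(x > 0 for x in incomes)
p = 1 - n_yes / len(incomes)  # Dirac weight: p = N_no / (N_no + N_yes)
mean_pos = statistics.mean(x for x in incomes if x > 0)

# Expected income under p*delta(0) + (1-p)*D_emp equals the plain sample mean.
two_part = (1 - p) * mean_pos
print(abs(two_part - statistics.mean(incomes)) < 1e-6)
```

In the same spirit, the classifier-plus-glm strategy in the last paragraph estimates $p$ (getting a job) and $D_2$ (salary given a job) separately, conditional on covariates.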
|
Ranking undergrad students by their future income - Mixture distribution
|
Mixture models of only two distributions, $D_1$ and $D_2$, can be considered to have the density $p D_1+(1-p) D_2$, where $0<p<1$. Now $p\neq0,1$ because it would no longer be a mixture. In general fo
|
Ranking undergrad students by their future income - Mixture distribution
Mixture models of only two distributions, $D_1$ and $D_2$, can be considered to have the density $p D_1+(1-p) D_2$, where $0<p<1$. Note $p\neq0,1$, because then it would no longer be a mixture. In general, for any mixture distribution, software is available to find the best mixture model, e.g., FindDistribution in Mathematica, or mixtools in R as suggested by @omidi. However, in the case given by the OP, there is no overlap between the distributions, as a salary of zero is not a salary. There is no particular need to add one to the income to take logarithms, as there is no need to take logarithms at all. Instead, all that need be done is to assign a Dirac $\delta$ for $D_1$, the zero salaries, i.e., $p \delta(x=0) +(1-p)D_2$, and then find $p$ and $D_2$. Finding $p$ is trivial, as $p=\frac{N_{no}}{N_{no}+N_{yes}}$, where $N_{yes}$ and $N_{no}$ are the numbers of subjects with and without income, respectively. Finding the best distribution for those with incomes can be done with software, or by searching the prior literature for models. However, even then, the distribution of those with income, $D_2$ here, can itself be a mixture distribution. Moreover, use of an empirical distribution may be enough to answer some of the questions the OP needs to answer, and although it is nice to have a theoretical distribution, it is not absolutely required.
That is, the final distribution could be $p \delta(0) +(1-p)D_{Emp}(x)$, where $p=\frac{N_{no}}{N_{no}+N_{yes}}$. To be clear, that formulation is identical to what the data is, as the empirical distribution is $\frac{1}{N_{yes}}\sum_{i}\delta(x-x_i)$.
Other questions: "...set the mean for one of them [Sic, Gaussian distributions] to be known as equal to 0." That is what the Dirac $\delta(0)$ is; although the first distribution used to create it was historically Cauchy, it makes little difference which limit one uses to create the $\delta$, because its standard deviation is zero. Regarding using a Gaussian for the second distribution, not a good idea as the right tail of income is notoriously heavy, e.g., see Pareto distribution. If a theoretical distribution is desired in R for the OP's data for those having an income, see this.
Finally, I do not fully understand "Another problem might be that I am always predicting the income using a regression and building a ranking from that, rather than running an ordinal regression. What is the best way handle this situation - if the target variable (income) that the ranking is based on is itself available for training data?"
If one desires to predict income using regression, transforming variables may be desirable, but no transformation of $p \delta(0) +(1-p)D_{Emp}(x)$ will have "nice" residuals. That is, the residuals will not be Gaussian or homoscedastic, so the usual default regression techniques will not yield accurate answers. Maybe the way to treat this is to use classifiers to find the probability of finding a job with a yes/no Y-axis variable, and if yes use glm to find what that salary is for a best theoretical distribution of non-zero salaries. That is, one can then use classifiers to determine predictors of non-zero salary, and when finished, one then has two cases, predictors of getting a job, and predictors of salary if one secures a position.
|
Ranking undergrad students by their future income - Mixture distribution
Mixture models of only two distributions, $D_1$ and $D_2$, can be considered to have the density $p D_1+(1-p) D_2$, where $0<p<1$. Now $p\neq0,1$ because it would no longer be a mixture. In general fo
|
43,183
|
ARIMAX model's exogenous components?
|
Look at the simplest form of ARIMAX(0,1,0) or IX(1):
$$\Delta y_t=c+x_t+\varepsilon_t$$
where $x_t$ - exogenous variables. Take an expectation:
$$E[\Delta y_t]=c+E[x_t]$$
If you think that your $\Delta y_t$ is stationary, then $x_t$ must be stationary too. The same with ARX(1):
$$y_t=\phi_1 y_{t-1}+c+x_t+\varepsilon_t$$
and expectation:
$$E[y_t]=\phi_1 E[y_{t-1}]+c+E[x_t]$$
$$E[y_t]=\frac{c+E[x_t]}{1-\phi_1}$$
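A quick simulation check of the last identity, with arbitrary parameter values:

```python
import random

random.seed(6)
phi, c, Ex = 0.5, 1.0, 2.0  # AR coefficient, constant, mean of the exogenous x_t

y, ys = 0.0, []
for _ in range(100_000):
    x = random.gauss(Ex, 1.0)                  # stationary exogenous input
    y = phi * y + c + x + random.gauss(0.0, 1.0)
    ys.append(y)

theory = (c + Ex) / (1 - phi)                  # E[y_t] = (c + E[x_t]) / (1 - phi_1) = 6
sample = sum(ys[1_000:]) / len(ys[1_000:])     # drop burn-in
print(round(sample, 2), theory)
```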
|
ARIMAX model's exogenous components?
|
Look at the simplest form of ARIMAX(0,1,0) or IX(1):
$$\Delta y_t=c+x_t+\varepsilon_t$$
where $x_t$ - exogenous variables. Take an expectation:
$$E[\Delta y_t]=c+E[x_t]$$
If you think that your $\Delt
|
ARIMAX model's exogenous components?
Look at the simplest form of ARIMAX(0,1,0) or IX(1):
$$\Delta y_t=c+x_t+\varepsilon_t$$
where $x_t$ - exogenous variables. Take an expectation:
$$E[\Delta y_t]=c+E[x_t]$$
If you think that your $\Delta y_t$ is stationary, then $x_t$ must be stationary too. The same with ARX(1):
$$y_t=\phi_1 y_{t-1}+c+x_t+\varepsilon_t$$
and expectation:
$$E[y_t]=\phi_1 E[y_{t-1}]+c+E[x_t]$$
$$E[y_t]=\frac{c+E[x_t]}{1-\phi_1}$$
|
ARIMAX model's exogenous components?
Look at the simplest form of ARIMAX(0,1,0) or IX(1):
$$\Delta y_t=c+x_t+\varepsilon_t$$
where $x_t$ - exogenous variables. Take an expectation:
$$E[\Delta y_t]=c+E[x_t]$$
If you think that your $\Delt
|
43,184
|
ARIMAX model's exogenous components?
|
This is known as transfer function model:
$$A(L)y_t=B(L)e_t+C(L)x_t$$
$$y_t=A(L)^{-1}B(L)e_t+A(L)^{-1}C(L)x_t$$
For stability and invertibility you have to place some restrictions on the characteristic polynomial of this representation, which means the processes jointly have to satisfy these conditions.
You can have seasonal event dummies, intervention dummies or other deterministic components here without causing too much trouble. The problem is to identify them in the first place!
You can think of the ARIMAX model as a stochastic difference equation which can have Dirac delta function type events with their own constant coefficients.
|
ARIMAX model's exogenous components?
|
This is known as transfer function model:
A(L)y(t)=B(L)e(t)+C(L)x(t)
y(t)=inv(A(L))B(L)e(t)+inv(A(L))C(L)x(t)
For stabity and invertibility you have to have some restrictions on the characteristic
|
ARIMAX model's exogenous components?
This is known as transfer function model:
$$A(L)y_t=B(L)e_t+C(L)x_t$$
$$y_t=A(L)^{-1}B(L)e_t+A(L)^{-1}C(L)x_t$$
For stability and invertibility you have to place some restrictions on the characteristic polynomial of this representation, which means the processes jointly have to satisfy these conditions.
You can have seasonal event dummies, intervention dummies or other deterministic components here without causing too much trouble. The problem is to identify them in the first place!
You can think of the ARIMAX model as a stochastic difference equation which can have Dirac delta function type events with their own constant coefficients.
|
ARIMAX model's exogenous components?
This is known as transfer function model:
A(L)y(t)=B(L)e(t)+C(L)x(t)
y(t)=inv(A(L))B(L)e(t)+inv(A(L))C(L)x(t)
For stabity and invertibility you have to have some restrictions on the characteristic
|
43,185
|
Selecting problems of the appropriate difficulty based for adaptive learning [closed]
|
You could use Item Response Theory or its more complex version, Hierarchical Item Response Theory. It will estimate student abilities and item difficulties to produce probabilities of success, using models based on logistic regression.
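As a minimal sketch of the idea (the Rasch/1PL model, with invented item difficulties and an assumed target success probability of 0.75, i.e. challenging but mostly solvable):

```python
import math

def rasch(theta, b):
    """Rasch (1PL) model: probability of success given ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Adaptive selection: pick the item whose predicted success probability
# for this student is closest to the target.
items = {"easy": -1.0, "medium": 0.0, "hard": 1.5}  # hypothetical difficulties
theta = 0.4
best = min(items, key=lambda k: abs(rasch(theta, items[k]) - 0.75))
print(best, round(rasch(theta, items[best]), 2))
```

In a real system, $\theta$ and the item difficulties would themselves be estimated from response data, which is what IRT software does.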
|
Selecting problems of the appropriate difficulty based for adaptive learning [closed]
|
You could Item Response Theory or it's more complex version Hierarchical Item Response Theory. It will estimate students abilities and items difficulty to produce probability of success using models b
|
Selecting problems of the appropriate difficulty based for adaptive learning [closed]
You could use Item Response Theory or its more complex version, Hierarchical Item Response Theory. It will estimate student abilities and item difficulties to produce probabilities of success, using models based on logistic regression.
|
Selecting problems of the appropriate difficulty based for adaptive learning [closed]
You could Item Response Theory or it's more complex version Hierarchical Item Response Theory. It will estimate students abilities and items difficulty to produce probability of success using models b
|
43,186
|
Difference of Prediction between Graph Representation and Data Matrix Representation
|
I'm trying to answer one aspect of the question.
Generally a graph can be described by a matrix, with the columns and rows indexed by vertex, and elements corresponding to the edge weights. An adjacency matrix can also describe an undirected/directed graph. So graph analysis is usually equivalent to performing analysis on matrices. In fact, many graph-based algorithms can be implemented through matrix operations, and some big-data problems such as biological networks and social networks (and also the most famous one, Google page ranking) are often treated as matrices for further numerical analysis.
Take social network as an example. Network analysis uses nodes to represent people and edges to represent ties or relations between two people. You can use two colors of nodes to represent the gender, and use a directed edge from node A to B indicating "A chooses B". When there is two directed edges from both A to B and B to A, their ties are reciprocated and they share a relationship. But that may be all the information a graph can include. We don't know whether a shared tie means A and B are spouse or friends, although we may induce more forms of lines to differentiate them (dash lines, color lines, etc.). Yet when there are many people in the network and/or many kinds of relations, the graph becomes too visually complicated to display the patterns.
Representing information in the form of matrices seems more flexible. Because we can
(1) establish a bunch of matrices. We separate different relations with binary matrices. We can also,
(2) implement matrix permutation, then obtain a block density matrix (for example the ratio of one specific relation between male and male, male and female, female and female...).
(3) With the boolean matrices, the AND, OR, XOR.., and with other matrices, addition, subtraction, multiplication, and even inverse operations can be implemented with the selected matrices for further processing.
(4) those matrix operations all have their practical meanings.
(i) Adjacency matrix indicates whether there exists a path between two people, and the paths number of length one from each person to another;
(ii) Squared adjacency matrix tells us how many pathways of length two are there from each person to another, so on and so forth. Measuring the path number and lengths among the people in the social network allow us to index and infer some important tendencies;
(iii)The eigenvector analysis is another approach to find the "global" structure of the network in opposite to a "local" feature.
(5) Last but not least, structural analysts in subgroup , or clique (graph) can also be represented with matrix. And the clustering methods also handles high-dimensional arrays.
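The path-counting point (i)-(ii) can be sketched in a few lines of plain Python; the toy 3-person network and the helper name are my own illustration, not from the answer:

```python
def mat_mult(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of a 3-person network: persons 0-1 and 1-2 are tied.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]

# Entry (i, j) of the squared matrix counts paths of length two from i to j.
adj2 = mat_mult(adj, adj)
```

Here `adj2[0][2]` is 1 (the single path 0-1-2), and the diagonal entries recover each person's degree.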
|
Difference of Prediction between Graph Representation and Data Matrix Representation
|
I'm trying to answer the question in one aspect.
Generally a graph can be described by a matrix, with the columns and rows indexed by vertex, and elements corresponding to the edge weights. And adjac
|
Difference of Prediction between Graph Representation and Data Matrix Representation
I'm trying to answer the question in one aspect.
Generally a graph can be described by a matrix, with the columns and rows indexed by vertex, and elements corresponding to the edge weights. And adjacent matrix can also describe an undirect/direct graph. So graph analysis is usually equivalent to perform analysis on matrices. In fact, many graph-based algorithms can be implemented through matrix operations, and some big data problems such as biological network and social network (and also the most famous one,google page ranking) are often treated as a matrix to do the further numerical analysis.
Take social network as an example. Network analysis uses nodes to represent people and edges to represent ties or relations between two people. You can use two colors of nodes to represent the gender, and use a directed edge from node A to B indicating "A chooses B". When there is two directed edges from both A to B and B to A, their ties are reciprocated and they share a relationship. But that may be all the information a graph can include. We don't know whether a shared tie means A and B are spouse or friends, although we may induce more forms of lines to differentiate them (dash lines, color lines, etc.). Yet when there are many people in the network and/or many kinds of relations, the graph becomes too visually complicated to display the patterns.
Representing information in the form of matrices seems more flexible. Because we can
(1) establish a bunch of matrices. We separate different relations with binary matrices. We can also,
(2) implement matrix permutation, then obtain a block density matrix (for example the ratio of one specific relation between male and male, male and female, female and female...).
(3) With the boolean matrices, the AND, OR, XOR.., and with other matrices, addition, subtraction, multiplication, and even inverse operations can be implemented with the selected matrices for further processing.
(4) those matrix operations all have their practical meanings.
(i) Adjacency matrix indicates whether there exists a path between two people, and the paths number of length one from each person to another;
(ii) Squared adjacency matrix tells us how many pathways of length two are there from each person to another, so on and so forth. Measuring the path number and lengths among the people in the social network allow us to index and infer some important tendencies;
(iii)The eigenvector analysis is another approach to find the "global" structure of the network in opposite to a "local" feature.
(5) Last but not least, structural analysts in subgroup , or clique (graph) can also be represented with matrix. And the clustering methods also handles high-dimensional arrays.
|
Difference of Prediction between Graph Representation and Data Matrix Representation
I'm trying to answer the question in one aspect.
Generally a graph can be described by a matrix, with the columns and rows indexed by vertex, and elements corresponding to the edge weights. And adjac
|
43,187
|
What is Polychoric Correlation Coefficient intuitively?
|
I found Kolenikov and Angeles "The Use of Discrete Data in Principal Component Analysis" working paper to be helpful (published version here if you have access). Slides here as well.
To quote the authors (from the help-file for their polychoric Stata command):
The polychoric correlation of two ordinal variables is derived as
follows. Suppose each of the ordinal variables was obtained by
categorizing a normally distributed underlying variable, and those two
unobserved variables follow a bivariate normal distribution. Then the
(maximum likelihood) estimate of that correlation is the polychoric
correlation.
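To make the quoted definition concrete, here is an illustrative stdlib-only simulation (my own sketch, not from the paper or the Stata command): categorizing correlated latent normals and computing Pearson's correlation on the ordinal codes shows the attenuation that the polychoric correlation is designed to undo. The thresholds and the latent correlation of 0.8 are arbitrary choices.

```python
import random

def categorize(z, cuts=(-0.5, 0.5)):
    """Map a latent normal value to an ordinal category (0, 1, or 2)."""
    return sum(z > c for c in cuts)

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
rho = 0.8  # true correlation of the latent bivariate normal
xs, ys = [], []
for _ in range(20000):
    u = random.gauss(0, 1)
    v = rho * u + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)  # corr(u, v) = rho
    xs.append(categorize(u))
    ys.append(categorize(v))

r_ordinal = pearson(xs, ys)  # noticeably smaller than rho
```

The polychoric correlation is the maximum-likelihood estimate of `rho` itself, recovered from the observed ordinal table rather than the (attenuated) Pearson correlation of the codes.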
|
What is Polychoric Correlation Coefficient intuitively?
|
I found Kolenikov and Angeles "The Use of Discrete Data in Principal Component Analysis" working paper to be helpful (published version here if you have access). Slides here as well.
To quote the auth
|
What is Polychoric Correlation Coefficient intuitively?
I found Kolenikov and Angeles "The Use of Discrete Data in Principal Component Analysis" working paper to be helpful (published version here if you have access). Slides here as well.
To quote the authors (from the help-file for their polychoric Stata command):
The polychoric correlation of two ordinal variables is derived as
follows. Suppose each of the ordinal variables was obtained by
categorizing a normally distributed underlying variable, and those two
unobserved variables follow a bivariate normal distribution. Then the
(maximum likelihood) estimate of that correlation is the polychoric
correlation.
|
What is Polychoric Correlation Coefficient intuitively?
I found Kolenikov and Angeles "The Use of Discrete Data in Principal Component Analysis" working paper to be helpful (published version here if you have access). Slides here as well.
To quote the auth
|
43,188
|
"Estimated effects may be unbalanced" message when running aov in R. What does it mean?
|
aov is designed for balanced data (link). Balanced design is: An experimental design where all cells (i.e. treatment combinations) have the same number of observations (link).
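A quick way to check the condition the warning is about (a sketch of my own, not part of aov itself) is to count observations per treatment combination:

```python
from collections import Counter

def is_balanced(rows):
    """True if every treatment combination has the same number of observations."""
    counts = Counter(rows)
    return len(set(counts.values())) == 1

balanced = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
unbalanced = balanced + [("a", "x")]  # one extra observation in cell (a, x)
```

If `is_balanced` is false for your design, aov's sequential sums of squares depend on term order, which is what the message warns about.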
|
"Estimated effects may be unbalanced" message when running aov in R. What does it mean?
|
aov is designed for balanced data (link). Balanced design is: An experimental design where all cells (i.e. treatment combinations) have the same number of observations (link).
|
"Estimated effects may be unbalanced" message when running aov in R. What does it mean?
aov is designed for balanced data (link). Balanced design is: An experimental design where all cells (i.e. treatment combinations) have the same number of observations (link).
|
"Estimated effects may be unbalanced" message when running aov in R. What does it mean?
aov is designed for balanced data (link). Balanced design is: An experimental design where all cells (i.e. treatment combinations) have the same number of observations (link).
|
43,189
|
"Estimated effects may be unbalanced" message when running aov in R. What does it mean?
|
Install the package called car and load it, then calculate the sums of squares using Anova(lm(y~x1*x2), type=2).
Doing it this way will calculate type II SS, which can be used for analysis when the interaction is not significant;
if the interaction is significant for the unbalanced data, you should calculate type III SS.
|
"Estimated effects may be unbalanced" message when running aov in R. What does it mean?
|
Install package called car & activate it first and then calculate sum of squares using Anova(lm(y~x1*x2),type=2)
Doing this way will calculate type II SS which can be used for analysis when the intera
|
"Estimated effects may be unbalanced" message when running aov in R. What does it mean?
Install the package called car and load it, then calculate the sums of squares using Anova(lm(y~x1*x2), type=2).
Doing it this way will calculate type II SS, which can be used for analysis when the interaction is not significant;
if the interaction is significant for the unbalanced data, you should calculate type III SS.
|
"Estimated effects may be unbalanced" message when running aov in R. What does it mean?
Install package called car & activate it first and then calculate sum of squares using Anova(lm(y~x1*x2),type=2)
Doing this way will calculate type II SS which can be used for analysis when the intera
|
43,190
|
2D binary classification
|
This is a really deep question. I am going to try to answer it for your specific case and make broader points at the same time.
Has anyone an idea on how to proceed here?
How to proceed from here is really a question of which method to use. The answer for your particular case seems to be CART (Classification and Regression Trees). CARTs will allow you to get nice, rectangular regions for prediction, but they are very noisy. That is why Random Forests and other similar algorithms were created. Random Forests and the like trade clarity for an improvement in prediction.
In a general sense, the choice of method depends on two things: what your goals are for the analysis and how well the model fits your data. Nothing is stopping you from trying a couple of methods and choosing which fits best.
Note that trees can output a continuous probability. You can change the threshold for classifying an event (good in your case) to raise or lower sensitivity. Examine the ROC curve to see how the two relate before doing so.
Does it make sense to split the dataset into training- and test-set?
Yes and no. You should never measure the performance of the classification algorithm on the same data that it was fit on. You will end up drastically overfitting the data and overestimating the model's performance.
In this case, a training and test set does not make sense because of the small sample size (~70). Instead, I would use Leave One Out Cross Validation (LOOCV).
The algorithm goes like this:
Hold one observation out.
Fit the model on the data except the hold out from 1.
Classify the hold out from 1.
Repeat 1-3 until all observations have been held out.
Estimate fit based on the classifications from 3.
For LOOCV, the final model is the model fit on the entire dataset.
What proportion of the data should the training set be (I thought
around a 60/40-split).
In general, 60/40 or 50/50 splits are good. If you have enough data, do 50/25/25, where the second 25% goes to a validation set. When you have a validation set, you fit the model on the training set and then check its performance on the test set. If you think the model should be tweaked, do so and then retest on the test set until you are satisfied. Then, once the model is locked down, classify the data in the validation set. The results from the validation set will be those that you report.
For your case, I would recommend LOOCV.
How to avoid overfitting (i.e. boxes that are too small)?
Most algorithms have control parameters (e.g. cost for SVMs). In the tree package in R, there are several control parameters to help prevent overfitting (see tree.control).
What is a good statistic to assess the predictive performance in this
case? AUC? Accuracy? Positive predictive value? Matthews correlation
coefficient?
It depends on the purpose. I would recommend at least reporting Accuracy, Sensitivity, Specificity, and Positive and Negative Predictive Values. Since you said the emphasis should be on sensitivity, that should be a focus. AUC is also commonly used but is noisy.
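The LOOCV algorithm listed above can be sketched directly; this is my own minimal illustration (the toy 1-nearest-neighbour classifier stands in for a tree model, and all names are hypothetical):

```python
def loocv_accuracy(data, labels, classify):
    """Leave-one-out CV: hold each point out, fit on the rest, classify it."""
    hits = 0
    for i in range(len(data)):
        train_x = data[:i] + data[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        if classify(train_x, train_y, data[i]) == labels[i]:
            hits += 1
    return hits / len(data)

def nearest_neighbour(train_x, train_y, point):
    """Toy 1-NN classifier on 2D points (stand-in for a CART model)."""
    dists = [((x[0] - point[0]) ** 2 + (x[1] - point[1]) ** 2, y)
             for x, y in zip(train_x, train_y)]
    return min(dists)[1]

points_2d = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
point_labels = ["bad", "bad", "bad", "good", "good", "good"]
acc = loocv_accuracy(points_2d, point_labels, nearest_neighbour)
```

With two well-separated clusters, every held-out point is classified by its within-cluster neighbours, so the LOOCV accuracy is 1.0; with ~70 real observations the same loop gives an honest error estimate.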
|
2D binary classification
|
This is a really deep question. I am going to try to answer it for your specific case and make broader points at the same time.
Has anyone an idea on how to proceed here?
How to proceed from here i
|
2D binary classification
This is a really deep question. I am going to try to answer it for your specific case and make broader points at the same time.
Has anyone an idea on how to proceed here?
How to proceed from here is really a question of which method to use. The answer for your particular case seems to be CART (Classification and Regression Trees). CARTs will allow you to get nice, rectangular regions for prediction, but they are very noisy. That is why Random Forests and other similar algorithms were created. Random Forests and the like trade clarity for an improvement in prediction.
In a general sense, the choice of method depends on two things: what your goals are for the analysis and how well the model fits your data. Nothing is stopping you from trying a couple of methods and choosing which fits best.
Note that trees can output a continuous probability. You can change the threshold for classifying an event (good in your case) to raise or lower sensitivity. Examine the ROC curve to see how the two relate before doing so.
Does it make sense to split the dataset into training- and test-set?
Yes and no. You should never measure the performance of the classification algorithm on the same data that it was fit on. You will end up drastically overfitting the data and overestimating the model's performance.
In this case, a training and test set does not make sense because of the small sample size (~70). Instead, I would use Leave One Out Cross Validation (LOOCV).
The algorithm goes like this:
Hold one observation out.
Fit the model on the data except the hold out from 1.
Classify the hold out from 1.
Repeat 1-3 until all observations have been held out.
Estimate fit based on the classifications from 3.
For LOOCV, the final model is the model fit on the entire dataset.
What proportion of the data should the training set be (I thought
around a 60/40-split).
In general, 60/40 or 50/50 splits are good. If you have enough data, do 50/25/25, where the second 25% goes to a validation set. When you have a validation set, you fit the model on the training set and then check its performance on the test set. If you think the model should be tweaked, do so and then retest on the test set until you are satisfied. Then, once the model is locked down, classify the data in the validation set. The results from the validation set will be those that you report.
For your case, I would recommend LOOCV.
How to avoid overfitting (i.e. boxes that are too small)?
Most algorithms have control parameters (e.g. cost for SVMs). In the tree package in R, there are several control parameters to help prevent overfitting (see tree.control).
What is a good statistic to assess the predictive performance in this
case? AUC? Accuracy? Positive predictive value? Matthews correlation
coefficient?
It depends on the purpose. I would recommend at least reporting Accuracy, Sensitivity, Specificity, and Positive and Negative Predictive Values. Since you said the emphasis should be on sensitivity, that should be a focus. AUC is also commonly used but is noisy.
|
2D binary classification
This is a really deep question. I am going to try to answer it for your specific case and make broader points at the same time.
Has anyone an idea on how to proceed here?
How to proceed from here i
|
43,191
|
Comparing power law fits with large uncertainties
|
Because there are errors in both variables, we do not know the true values of the $(x,y)$ data associated with each observation. Let us suppose that each of $n=6$ independent observations results from measurement errors independently made in the two coordinates $(\xi, \eta) = (\xi, f(\xi;\alpha,\beta))$ with $f(\xi; \alpha, \beta) = \alpha \xi^\beta$. When those errors have Gaussian distributions and their standard deviations are proportional to the stated uncertainties $\sigma_i$ (in the first coordinate) and $\tau_i$ (in the second coordinate), the negative log likelihood of the data $(x_i, y_i)$ is
$$-\log(\Lambda(\alpha, \beta, (\xi_i))) = C+\frac{1}{2\lambda^2} \sum_i^n \left(\left(\frac{x_i - \xi_i}{\sigma_i}\right)^2 + \left(\frac{y_i - f(\xi_i; \alpha, \beta)}{\tau_i}\right)^2\right)$$
where the $\xi$ are the correct first coordinates and $C = n\log(2\pi) - 2 n \log(\lambda) + \sum_i^n \log(\sigma_i\tau_i)$ does not depend on the $n+2$ parameters $\alpha, \beta,$ and $(\xi_i)$.
The maximum likelihood estimates of $\alpha, \beta, (\xi_i)$ for these data (using $\lambda=1/2$) are $\hat\alpha = 7.77197$, $\hat\beta=-0.92872$, and $(\xi_i) = (6.5, 8.8, 17.4, 58.8, 124.9, 505.4)$, whence $(f(\xi_i; \alpha, \beta))$ = $(417.4, 316.0, 167.5, 54.0, 26.8, 7.3)$. To test whether $\beta=-1/2$ is consistent with the data, maximize the log likelihood subject to that constraint. Twice the difference in values at the two optima (unconstrained versus under the null hypothesis) has a $\chi^2(1)$ distribution. For these data it equals $1035,$ which is enormous: clearly $\beta\ne -1/2$.
In this figure, line segments connect the data to their fitted values for both fits. Symbols for the data are scaled in inverse proportion to the geometric means of the $\sigma_i$ and $\tau_i$, so that the more precise data are shown with larger circles. The slope of the data on this log-log plot is almost $-1$, which is far from the hypothesized value of $-1/2$. The best fit with a slope of $-1/2$ is shown in red. Evidently it posits tremendously large measurement errors occurred in both coordinates, far larger than suggested by the $(\sigma_i)$ and $(\tau_i)$.
If you mistrust the stated uncertainties, compensate by increasing $\lambda$. (In the R code below, include an option of the form , lambda=2 in the calls to optim.) Optimal values of $\lambda$ could be found via cross-validation to get a reasonable match between the residuals and the scaled standard deviations $\sigma_i\hat\lambda$ and $\tau_i\hat\lambda$, but I have not carried that out because the conclusion is so strong anyway. (Moreover, changing $\lambda$ will not change either of the fits.)
R code to perform the calculations and display the figure:
x.data <- matrix(c(5,15,13,60,125,505,
3,14,8,2,2,3), ncol=2)
y.data <- matrix(c(415,316,168,59,29,5,
33,3,6,4,3,1), ncol=2)
#
# Functional form of the fit.
#
f <- function(abt) {
x <- abt[-(1:2)]
exp(abt[2] * log(x) + abt[1])
}
#
# Negative log likelihood.
#
ll <- function(abt, x=x.data, y=y.data, lambda=1/2) {
a <- abt[1]; b <- abt[2]; x.0 <- abt[-(1:2)]
y.0 <- f(abt)
sum((((x[ ,1] - x.0)/x[ ,2])^2 + ((y[ ,1] - y.0)/y[ ,2])^2) / (2*lambda^2) +
log(x[, 2]*y[, 2]/(lambda^2)))
}
#
# Constrained log likelihood.
#
ll.0 <- function(abt, slope, ...) {
abt[2] <- slope
ll(abt, ...)
}
#
# Brute force fitting using ML.
#
ab.0 <- coef(lm(log(y.data[ ,1]) ~ log(x.data[ ,1]))) # Initial LS estimates
abt.0 <- c(ab.0, x.data[ ,1])
fit <- optim(abt.0, ll, control=list(maxit=20000))
fit.0 <- optim(fit$par, ll.0, control=list(maxit=20000), slope=-1/2)
fit.0$par[2] <- -1/2
#
# Plot of data and the fits.
#
plot(x.data[,1], y.data[,1], xlim=c(1/2,600), ylim=c(2,600), log="xy",
     cex = 4 / sqrt(x.data[,2]*y.data[,2]),
     main="Data and Fits", xlab="X", ylab="Y")
segments(x.data[,1], y.data[,1], fit.0$par[-(1:2)], f(fit.0$par), lty=3, col="#e0404080")
segments(x.data[,1], y.data[,1], fit$par[-(1:2)], f(fit$par), lty=3, col="#4040e080")
points(x.data[,1], y.data[,1],
       cex = 4 / sqrt(x.data[,2]*y.data[,2]), pch=19, col="#80808080")
points(fit$par[-(1:2)], f(fit$par), pch=19, col="Blue")
points(fit.0$par[-(1:2)], f(fit.0$par), pch=3, col="Red")
abline(c(1/log(10), 1)*coef(lm(log(f(fit$par)) ~ log(fit$par[-(1:2)]))), col="Blue")
abline(c(1/log(10), 1)*coef(lm(log(f(fit.0$par)) ~ log(fit.0$par[-(1:2)]))), col="Red")
legend(x="bottomleft", legend=c("Data", "Fit", "Fit (slope = -1/2)"),
bg="#f8f8f8",
col=c("Gray", "Blue", "Red"),
pch=c(19, 1, 3))
#
# Chi-squared statistic to test beta=-1/2.
#
(chisq <- 2*(fit.0$value - fit$value))
pchisq(chisq, 1, lower.tail=FALSE)
|
Comparing power law fits with large uncertainties
|
Because there are errors in both variables, we do not know the true values of the $(x,y)$ data associated with each observation. Let us suppose that each of $n=6$ independent observations results fr
|
Comparing power law fits with large uncertainties
Because there are errors in both variables, we do not know the true values of the $(x,y)$ data associated with each observation. Let us suppose that each of $n=6$ independent observations results from measurement errors independently made in the two coordinates $(\xi, \eta) = (\xi, f(\xi;\alpha,\beta))$ with $f(\xi; \alpha, \beta) = \alpha \xi^\beta$. When those errors have Gaussian distributions and their standard deviations are proportional to the stated uncertainties $\sigma_i$ (in the first coordinate) and $\tau_i$ (in the second coordinate), the negative log likelihood of the data $(x_i, y_i)$ is
$$-\log(\Lambda(\alpha, \beta, (\xi_i))) = C+\frac{1}{2\lambda^2} \sum_i^n \left(\left(\frac{x_i - \xi_i}{\sigma_i}\right)^2 + \left(\frac{y_i - f(\xi_i; \alpha, \beta)}{\tau_i}\right)^2\right)$$
where the $\xi$ are the correct first coordinates and $C = n\log(2\pi) - 2 n \log(\lambda) + \sum_i^n \log(\sigma_i\tau_i)$ does not depend on the $n+2$ parameters $\alpha, \beta,$ and $(\xi_i)$.
The maximum likelihood estimates of $\alpha, \beta, (\xi_i)$ for these data (using $\lambda=1/2$) are $\hat\alpha = 7.77197$, $\hat\beta=-0.92872$, and $(\xi_i) = (6.5, 8.8, 17.4, 58.8, 124.9, 505.4)$, whence $(f(\xi_i; \alpha, \beta))$ = $(417.4, 316.0, 167.5, 54.0, 26.8, 7.3)$. To test whether $\beta=-1/2$ is consistent with the data, maximize the log likelihood subject to that constraint. Twice the difference in values at the two optima (unconstrained versus under the null hypothesis) has a $\chi^2(1)$ distribution. For these data it equals $1035,$ which is enormous: clearly $\beta\ne -1/2$.
In this figure, line segments connect the data to their fitted values for both fits. Symbols for the data are scaled in inverse proportion to the geometric means of the $\sigma_i$ and $\tau_i$, so that the more precise data are shown with larger circles. The slope of the data on this log-log plot is almost $-1$, which is far from the hypothesized value of $-1/2$. The best fit with a slope of $-1/2$ is shown in red. Evidently it posits tremendously large measurement errors occurred in both coordinates, far larger than suggested by the $(\sigma_i)$ and $(\tau_i)$.
If you mistrust the stated uncertainties, compensate by increasing $\lambda$. (In the R code below, include an option of the form , lambda=2 in the calls to optim.) Optimal values of $\lambda$ could be found via cross-validation to get a reasonable match between the residuals and the scaled standard deviations $\sigma_i\hat\lambda$ and $\tau_i\hat\lambda$, but I have not carried that out because the conclusion is so strong anyway. (Moreover, changing $\lambda$ will not change either of the fits.)
R code to perform the calculations and display the figure:
x.data <- matrix(c(5,15,13,60,125,505,
3,14,8,2,2,3), ncol=2)
y.data <- matrix(c(415,316,168,59,29,5,
33,3,6,4,3,1), ncol=2)
#
# Functional form of the fit.
#
f <- function(abt) {
x <- abt[-(1:2)]
exp(abt[2] * log(x) + abt[1])
}
#
# Negative log likelihood.
#
ll <- function(abt, x=x.data, y=y.data, lambda=1/2) {
a <- abt[1]; b <- abt[2]; x.0 <- abt[-(1:2)]
y.0 <- f(abt)
sum((((x[ ,1] - x.0)/x[ ,2])^2 + ((y[ ,1] - y.0)/y[ ,2])^2) / (2*lambda^2) +
log(x[, 2]*y[, 2]/(lambda^2)))
}
#
# Constrained log likelihood.
#
ll.0 <- function(abt, slope, ...) {
abt[2] <- slope
ll(abt, ...)
}
#
# Brute force fitting using ML.
#
ab.0 <- coef(lm(log(y.data[ ,1]) ~ log(x.data[ ,1]))) # Initial LS estimates
abt.0 <- c(ab.0, x.data[ ,1])
fit <- optim(abt.0, ll, control=list(maxit=20000))
fit.0 <- optim(fit$par, ll.0, control=list(maxit=20000), slope=-1/2)
fit.0$par[2] <- -1/2
#
# Plot of data and the fits.
#
plot(x.data[,1], y.data[,1], xlim=c(1/2,600), ylim=c(2,600), log="xy",
     cex = 4 / sqrt(x.data[,2]*y.data[,2]),
     main="Data and Fits", xlab="X", ylab="Y")
segments(x.data[,1], y.data[,1], fit.0$par[-(1:2)], f(fit.0$par), lty=3, col="#e0404080")
segments(x.data[,1], y.data[,1], fit$par[-(1:2)], f(fit$par), lty=3, col="#4040e080")
points(x.data[,1], y.data[,1],
       cex = 4 / sqrt(x.data[,2]*y.data[,2]), pch=19, col="#80808080")
points(fit$par[-(1:2)], f(fit$par), pch=19, col="Blue")
points(fit.0$par[-(1:2)], f(fit.0$par), pch=3, col="Red")
abline(c(1/log(10), 1)*coef(lm(log(f(fit$par)) ~ log(fit$par[-(1:2)]))), col="Blue")
abline(c(1/log(10), 1)*coef(lm(log(f(fit.0$par)) ~ log(fit.0$par[-(1:2)]))), col="Red")
legend(x="bottomleft", legend=c("Data", "Fit", "Fit (slope = -1/2)"),
bg="#f8f8f8",
col=c("Gray", "Blue", "Red"),
pch=c(19, 1, 3))
#
# Chi-squared statistic to test beta=-1/2.
#
(chisq <- 2*(fit.0$value - fit$value))
pchisq(chisq, 1, lower.tail=FALSE)
|
Comparing power law fits with large uncertainties
Because there are errors in both variables, we do not know the true values of the $(x,y)$ data associated with each observation. Let us suppose that each of $n=6$ independent observations results fr
|
43,192
|
Analyzing repeated rank data.
|
I think that using a rating system, like Elo or Glicko, is a good choice.
Do the experiment with subject A, then repeat with subject B, subject C, and more.
Randomize matches' (i.e. comparisons) order and insert results in a rating system engine.
If you're more interested in using a system than in developing one, rankade, our free ranking system (for sports, games, items, and more), is another option. It allows matches with both 2 and 3+ factions, while Elo and Glicko work only one-on-one (here's a comparison). Given your items, it should be easy, and useful, to compare 3 or 4 types in each test.
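For the Elo option, the core update is only a few lines; this is a standard textbook sketch (K-factor 32 and the 400-point scale are conventional choices, not specific to your data):

```python
def expected(r_a, r_b):
    """Expected score of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, score_a, k=32):
    """Update both ratings after a match; score_a is 1, 0.5 or 0 for A."""
    e_a = expected(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1 - score_a) - (1 - e_a))
```

Feed each pairwise comparison in (randomized order, as suggested above) and the ratings converge to a ranking of the items.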
|
Analyzing repeated rank data.
|
I think that using a rating system, like Elo or Glicko, is a good choice.
Do the experiment with subject A, then repeat with subject B, subject C, and more.
Randomize matches' (i.e. comparisons) orde
|
Analyzing repeated rank data.
I think that using a rating system, like Elo or Glicko, is a good choice.
Do the experiment with subject A, then repeat with subject B, subject C, and more.
Randomize matches' (i.e. comparisons) order and insert results in a rating system engine.
If you're more interested in using a system than in developing one, rankade, our free ranking system (for sports, games, items, and more), is another option. It allows matches with both 2 and 3+ factions, while Elo and Glicko work only one-on-one (here's a comparison). Given your items, it should be easy, and useful, to compare 3 or 4 types in each test.
|
Analyzing repeated rank data.
I think that using a rating system, like Elo or Glicko, is a good choice.
Do the experiment with subject A, then repeat with subject B, subject C, and more.
Randomize matches' (i.e. comparisons) orde
|
43,193
|
Analyzing repeated rank data.
|
Here is one possible answer, although I imagine a better one exists:
Take the row means (ignoring blanks).
|
Analyzing repeated rank data.
|
Here is one possible answer, although I imagine a better one exists:
Take the row means (ignoring blanks).
|
Analyzing repeated rank data.
Here is one possible answer, although I imagine a better one exists:
Take the row means (ignoring blanks).
|
Analyzing repeated rank data.
Here is one possible answer, although I imagine a better one exists:
Take the row means (ignoring blanks).
|
43,194
|
Analyzing repeated rank data.
|
This might be an odd approach but logistic regression might be useful. For example, if person 1 compared items 1vs2 and 3vs4 and person 2 compared 2vs3 and 5vs6, and if the lower # items were always rated as "2", the data could be entered in R as:
data.frame(
T1=c(1,2,3,4,2,3,5,6),
T2=c(2,1,4,3,3,2,6,5),
Y =c(1,2,1,2,1,2,1,2)
)
All 3 variables are categorical. You can predict the probability of getting a "2" in Y from T2, controlling for T1. The p-values and standard errors will be meaningless because nesting was not taken into account but the estimates could be useful.
Then, you can order the Items based on the probabilities that they will be ranked higher than another item.
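As a cruder stand-in for the fitted probabilities (my own sketch, not the regression in the answer), you can order items by the fraction of paired comparisons they win, using the same (winner, loser) encoding:

```python
from collections import defaultdict

def win_fractions(comparisons):
    """comparisons: list of (winner, loser) pairs from the paired tests.
    Returns each item's fraction of its comparisons won."""
    wins, played = defaultdict(int), defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        played[winner] += 1
        played[loser] += 1
    return {item: wins[item] / played[item] for item in played}

# Person 1 compared 1vs2 and 3vs4; person 2 compared 2vs3 and 5vs6;
# the lower-numbered item lost each time.
fr = win_fractions([(2, 1), (4, 3), (3, 2), (6, 5)])
```

Sorting items by these fractions gives the same kind of ordering the logistic regression estimates would, without the (here meaningless) standard errors.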
|
Analyzing repeated rank data.
|
This might be an odd approach but logistic regression might be useful. For example, if person 1 compared items 1vs2 and 3vs4 and person 2 compared 2vs3 and 5vs6, and if the lower # items were always r
|
Analyzing repeated rank data.
This might be an odd approach but logistic regression might be useful. For example, if person 1 compared items 1vs2 and 3vs4 and person 2 compared 2vs3 and 5vs6, and if the lower # items were always rated as "2", the data could be entered in R as:
data.frame(
T1=c(1,2,3,4,2,3,5,6),
T2=c(2,1,4,3,3,2,6,5),
Y =c(1,2,1,2,1,2,1,2)
)
All 3 variables are categorical. You can predict the probability of getting a "2" in Y from T2, controlling for T1. The p-values and standard errors will be meaningless because nesting was not taken into account but the estimates could be useful.
Then, you can order the Items based on the probabilities that they will be ranked higher than another item.
|
Analyzing repeated rank data.
This might be an odd approach but logistic regression might be useful. For example, if person 1 compared items 1vs2 and 3vs4 and person 2 compared 2vs3 and 5vs6, and if the lower # items were always r
|
43,195
|
Relationship between inverse gamma and gamma distribution
|
Yes, but I think the first parameter of the Gamma should be $1-p/2$ instead of $1+p/2$.
$$
v \sim \text{Gamma}(1-p/2, s/2)
$$
I'm using the shape-rate parametrization, as in here.
|
Relationship between inverse gamma and gamma distribution
|
Yes, but I think the first parameter of the Gamma should be $1-p/2$ instead of $1+p/2$.
$$
v \sim \text{Gamma}(1-p/2, s/2)
$$
I'm using the shape-rate parametrization, as in here.
|
Relationship between inverse gamma and gamma distribution
Yes, but I think the first parameter of the Gamma should be $1-p/2$ instead of $1+p/2$.
$$
v \sim \text{Gamma}(1-p/2, s/2)
$$
I'm using the shape-rate parametrization, as in here.
|
Relationship between inverse gamma and gamma distribution
Yes, but I think the first parameter of the Gamma should be $1-p/2$ instead of $1+p/2$.
$$
v \sim \text{Gamma}(1-p/2, s/2)
$$
I'm using the shape-rate parametrization, as in here.
|
43,196
|
Relationship between inverse gamma and gamma distribution
|
Your scale parameter seems to be problematic. Here is the relationship between Gamma and Inv-Gamma distributions:
A random variable X is said to have the inverse Gamma distribution with
parameters $\alpha$ and $\theta$ if 1/X has the Gamma($\alpha$, $1/\theta$) distribution.
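The quoted relationship is easy to check by simulation with the standard library (an illustrative sketch; the parameter values α = 3, θ = 2 are arbitrary, and `random.gammavariate` takes shape and scale):

```python
import random

# X ~ Inv-Gamma(alpha, theta)  <=>  1/X ~ Gamma(alpha, scale 1/theta)
alpha, theta = 3.0, 2.0
random.seed(0)
draws = [1.0 / random.gammavariate(alpha, 1.0 / theta) for _ in range(50000)]
sample_mean = sum(draws) / len(draws)
# Theoretical mean of Inv-Gamma(alpha, theta) is theta / (alpha - 1) = 1.
```

The sample mean of the inverted Gamma draws matches the theoretical Inv-Gamma mean, confirming the parameter correspondence.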
|
43,197
|
Linear Regression Intuition behind least squares
|
Solution $\beta=(x^Tx)^{-1}x^Ty$
can be justified by the following three arguments:
It is a method-of-moments estimator which solves certain population moment conditions.
It minimizes the L2 norm of the residuals.
It is the maximum likelihood estimator when the residuals follow a Gaussian distribution.
The second argument is about mathematical optimization and does not rely on any statistical properties of the estimator.
The Gauss-Markov-Aitken theorem states that among linear unbiased estimators, (generalized) least squares has the minimum variance, so it is BLUE (the best linear unbiased estimator). The only constraint is that the residuals have to be spherical.
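As a sketch (with a made-up design matrix and coefficients), the normal-equations formula $\beta=(x^Tx)^{-1}x^Ty$ can be checked against numpy's least-squares routine, since both compute the minimizer of the same L2 norm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical design matrix: an intercept column plus two random regressors
x = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = x @ beta_true + rng.normal(scale=0.1, size=n)

# Normal-equations solution beta = (x^T x)^{-1} x^T y
beta_hat = np.linalg.solve(x.T @ x, x.T @ y)

# The same vector minimizes the L2 norm ||y - x beta||; compare with lstsq
beta_lstsq, *_ = np.linalg.lstsq(x, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))  # True
```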
|
43,198
|
Deep Learning Networks: Fundamental differences
|
Two qualitative answers that seem reasonable are that:
the more layers you have, the more computation you have to perform during a training step. This computation (think of backpropagation as the simplest example) may grow linearly with the number of layers, and with deep networks you easily get to 20-30 layers. Tools such as TensorFlow require you to build a data-flow graph so that the computation can be parallelized while respecting the dependencies between the individual weight updates.
there are several hyperparameters of the network that have to be tuned from data too. For example, consider choosing which filters to use in the first layer of a convolutional neural network: each combination that you try in a grid search or a random search results in a new network to train from scratch, and each has to be evaluated against all the others to find the one with the lowest error on a validation set.
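To make the second point concrete, here is a minimal sketch (the grid values are hypothetical) of how quickly a grid search multiplies the number of networks that must each be trained from scratch:

```python
from itertools import product

# Hypothetical hyperparameter grid for the first convolutional layer
grid = {
    "n_filters": [16, 32, 64],
    "kernel_size": [3, 5],
    "stride": [1, 2],
}

# Each combination corresponds to a new network trained from scratch
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 3 * 2 * 2 = 12 networks to train and validate
```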
|
43,199
|
Kendall-tau and RKHS spaces
|
By the Moore-Aronszajn theorem, $\tau$ is the kernel for some RKHS iff it's symmetric and positive semidefinite. (The link uses the term "positive definite" to mean the equivalent of psd for matrices, unfortunately; that terminology isn't standardized.)
Update: What I had here before was based on a mistaken understanding of the framework (as well as a mistaken definition of $\tau$ in the original question); see the comments.
The new $\tau$ is clearly symmetric. I'm not sure yet whether it's psd. As @cardinal pointed out, it does at least satisfy $\tau(X, X) = 1$ and $-1 \le \tau(X, Y) \le 1$ for continuous RVs, which is a good start.
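As a numerical illustration (not a proof): for continuous data, the sample Kendall-tau matrix can be written as a Gram matrix of sign vectors over observation pairs, so it is symmetric psd with unit diagonal, consistent with the properties noted above. A sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 60, 4
x = rng.normal(size=(n, d))  # continuous data, so ties have probability zero

# Sign vectors over all observation pairs s < t
s, t = np.triu_indices(n, k=1)
u = np.sign(x[s] - x[t])  # shape (n_pairs, d)

# Entry (i, j) is the sample Kendall tau between columns i and j;
# as a Gram matrix of sign vectors it is symmetric psd with unit diagonal.
tau = u.T @ u / u.shape[0]

print(np.allclose(np.diag(tau), 1.0), np.linalg.eigvalsh(tau).min() >= -1e-10)
```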
|
43,200
|
PAC learning theory and lower bound on the amount of input samples
|
To answer my own question, it is easy to get a lower bound when you assume that all variables are uniformly distributed. Then the probability of this event (let's call it A) becomes:
$$
\begin{aligned}
P(A) &= 1 - P(X_1 = 0, X_2 = 0, \ldots, X_n = 0) \\
&= 1 - \prod P(X_i = 0) \\
&= 1 - \left[ \binom{n}{k}\,\theta^{k} (1-\theta)^{n-k} \right]^m \\
&= \cdots
\end{aligned}
$$
The solution for non-uniform distributions can be found by compounding the Bernoulli distribution with a Beta prior.
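For illustration, the closed-form expression above is easy to evaluate numerically (the values of $n$, $k$, $m$, $\theta$ below are made up, since the original question's values are not given here):

```python
from math import comb

# Illustrative values only; the question's actual n, k, m, theta are unknown.
n, k, m, theta = 10, 3, 5, 0.2

# The binomial term from the derivation above
p_single = comb(n, k) * theta**k * (1 - theta) ** (n - k)

# P(A) = 1 - p_single^m, following the last line of the derivation
p_a = 1 - p_single**m
print(p_single, p_a)
```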