14,101 | Why do we use the log function for cross-entropy?

For binary classification, one way to encode the probability of an output is $p^y(1-p)^{1-y}$, where $y$ is encoded as 0 or 1. This is the likelihood function, and its meaning is that the output is 1 with probability $p$ and 0 with probability $1-p$.
Now you have a sample and you want to find the $p$ which best fits your data. On...

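The likelihood idea above can be sketched numerically. In this toy example (hypothetical data, a grid search standing in for calculus), taking the log turns the product of $p^{y_i}(1-p)^{1-y_i}$ terms into a sum, and the maximizer lands at the sample mean:

```python
import math

def log_likelihood(p, ys):
    # log of prod p^y (1-p)^(1-y) = sum of y*log(p) + (1-y)*log(1-p)
    return sum(y * math.log(p) + (1 - y) * math.log(1 - p) for y in ys)

ys = [1, 0, 1, 1, 0, 1]          # toy sample, y encoded as 0/1
grid = [i / 100 for i in range(1, 100)]
p_hat = max(grid, key=lambda p: log_likelihood(p, ys))
# the grid maximizer sits near the sample mean, sum(ys)/len(ys) = 2/3
```
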
14,102 | Why do we use the log function for cross-entropy?

I was also looking for an explanation and found one reason I find intuitive here:
It heavily penalizes predictions that are confident and wrong.
Check this graph; it shows the range of possible log-loss values given a true observation:
The log loss increases rapidly as the predicted probability approaches 0 (wrong p...

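The penalty asymmetry is easy to see numerically. A minimal sketch (hypothetical probabilities):

```python
import math

def log_loss(y, p):
    # per-example cross-entropy: -[y*log(p) + (1-y)*log(1-p)]
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

confident_right = log_loss(1, 0.99)   # approx 0.01
confident_wrong = log_loss(1, 0.01)   # approx 4.61; blows up as p -> 0
```
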
14,103 | What *is* an Artificial Neural Network?

Jürgen Schmidhuber, "Deep Learning in Neural Networks: An Overview", traces the history of key concepts in neural networks and deep learning. In his view, neural networks would appear to encompass essentially any model which can be characterized as a directed graph where each node represents some computational unit. Sch...

14,104 | What *is* an Artificial Neural Network?

If you want a basic definition of an ANN, you might say that it's a directed graphical model, where inputs and outputs are processed at each node via an activation function, and most of the time gradient descent is used to train it. So the question really becomes: what models out there can be expressed as graphical mod...

14,105 | What *is* an Artificial Neural Network?

Perhaps a more accurate name for ANNs is "differentiable networks", i.e. complex parametrized functions that can be optimized using gradient descent or its variants. This is a very general definition that emphasizes differentiability but doesn't say anything about the principal ideas, the tasks it is suited for, or the underlyi...

14,106 | What *is* an Artificial Neural Network?

I might try to postulate some things that help to define a neural network.
A computation (directed) graph with adjustable parameters.
Said parameters can be adjusted to conform to data (real or simulated).
An objective function to be optimized is involved, implicitly or explicitly. It can be global or local on paramete...

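The postulates above can be made concrete with a minimal one-parameter example (hypothetical data): a one-node computation "graph" $\hat y = wx$, an explicit squared-error objective, and gradient descent adjusting $w$ to conform to the data.

```python
# Data generated (hypothetically) from y = 3*x
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

w = 0.0     # the single adjustable parameter
lr = 0.05   # learning rate
for _ in range(200):
    # gradient of the objective sum (w*x - y)^2 w.r.t. w is sum 2*x*(w*x - y)
    grad = sum(2 * x * (w * x - y) for x, y in data)
    w -= lr * grad
# w converges toward the generating value 3.0
```
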
14,107 | Relationship between Hessian Matrix and Covariance Matrix

You should first check out this: Basic question about Fisher Information matrix and relationship to Hessian and standard errors.
Suppose we have a statistical model (family of distributions) $\{f_{\theta}: \theta \in \Theta\}$. In the most general case we have $\mathrm{dim}(\Theta) = d$, so this family is parameterized...

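The Hessian/information connection can be illustrated numerically for a Bernoulli model (a hypothetical example, not from the linked question): the negative finite-difference second derivative of the log-likelihood at the MLE matches the analytic Fisher information $n/(p(1-p))$, whose inverse is the familiar variance of $\hat p$.

```python
import math

def loglik(p, k, n):
    # log-likelihood of k successes in n Bernoulli trials
    return k * math.log(p) + (n - k) * math.log(1 - p)

n, k = 100, 40
p_hat = k / n
h = 1e-5
# finite-difference second derivative at the MLE
d2 = (loglik(p_hat + h, k, n) - 2 * loglik(p_hat, k, n)
      + loglik(p_hat - h, k, n)) / h**2
obs_info = -d2
# should match n / (p_hat * (1 - p_hat)) = 416.67
```
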
14,108 | Generating samples from singular Gaussian distribution

The singular Gaussian distribution is the push-forward of a nonsingular distribution in a lower-dimensional space. Geometrically, you can take a standard Normal distribution, rescale it, rotate it, and embed it isometrically into an affine subspace of a higher-dimensional space. Algebraically, this is done by means o...

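A NumPy sketch of the construction the answer begins to describe, assuming the standard factor-based method: factor the singular covariance $\Sigma = AA^\top$ (keeping only nonzero eigenvalues) and push a lower-dimensional standard normal through $A$. The specific covariance is a hypothetical rank-1 example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-1 (singular) covariance in R^2: all mass on the line y = 2x
mean = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 2.0],
                  [2.0, 4.0]])

# Factor Sigma = A A^T via eigendecomposition, dropping zero eigenvalues
vals, vecs = np.linalg.eigh(Sigma)
keep = vals > 1e-10
A = vecs[:, keep] * np.sqrt(vals[keep])

# Push a standard normal from the lower-dimensional space through A
z = rng.standard_normal((int(keep.sum()), 1000))
samples = mean[:, None] + A @ z
# every sample satisfies the affine constraint 2*x - y = 0
```
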
14,109 | Why is Functional Data Analysis (FDA) not as popular?

As someone with almost zero knowledge in FDA but who has recently started reading and thinking about it, here are some of my thoughts on why FDA is not so popular these days. Please take them with a grain of salt, as I am far from being an expert:
Most of the Data Science problems people are interested in solving "do ...

14,110 | Why is Functional Data Analysis (FDA) not as popular?

FDA was late to the party, and its benefits over established applications are at times incremental. In many cases, "standard longitudinal data techniques" check most boxes already. FDA does provide some unique advantages in specific use cases (e.g. working with density functions or covariance surfaces as the unit of anal...

14,111 | What is the connection between partial least squares, reduced rank regression, and principal component regression?

These are three different methods, and none of them can be seen as a special case of another.
Formally, if $\mathbf X$ and $\mathbf Y$ are centered predictor ($n \times p$) and response ($n\times q$) datasets and if we look for the first pair of axes, $\mathbf w \in \mathbb R^p$ for $\mathbf X$ and $\mathbf v \in \math...

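The truncated formal setup can be illustrated for two of the three methods. Assuming the standard characterizations (the first PCR axis is the leading principal component of $\mathbf X$, i.e. it maximizes $\mathrm{Var}(\mathbf{Xw})$; the first PLS axis is the leading left singular vector of $\mathbf X^\top \mathbf Y$, maximizing covariance with the response), a NumPy sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 200, 5, 2
X = rng.standard_normal((n, p))
Y = X @ rng.standard_normal((p, q)) + 0.5 * rng.standard_normal((n, q))
X = X - X.mean(0)   # center, as in the answer's setup
Y = Y - Y.mean(0)

# PCR: first axis maximizes Var(Xw) -> leading right singular vector of X
w_pcr = np.linalg.svd(X, full_matrices=False)[2][0]

# PLS: first axis maximizes Cov(Xw, Yv) -> leading left singular vector of X^T Y
w_pls = np.linalg.svd(X.T @ Y, full_matrices=False)[0][:, 0]
# the two unit directions generally differ: PCR ignores Y, PLS does not
```
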
14,112 | How much data do you need for a convolutional neural network?

In order to figure out whether or not more data will be helpful, you should compare the performance of your algorithm on the training data (i.e. the data used to train the neural network) to its performance on testing data (i.e. data the neural network did not "see" in training).
A good thing to check would be the erro...

14,113 | How much data do you need for a convolutional neural network?

The naive answer is that more data are always needed.
Iterating over the same dataset, say for more epochs, helps you "refine" the result, but it doesn't improve the result as much as having more data.
As an example, I'm training a convnet to do sentence modelling, and to test whether I need more data I tried to split my tr...

14,114 | How much data do you need for a convolutional neural network?

I guess the most important thing is that the samples in your data are well spread, because no matter how much data you have, more data would always be better.
After all, if you try to learn to distinguish between cat and dog pictures, you can't expect your model to perform well if you only feed it cat images.
As sugge...

14,115 | How much data do you need for a convolutional neural network?

Another method generally used to figure out whether your network has learned enough features is to visualize the initial filters. If the network is well trained, it should display smooth filters. A noisy filter generally indicates that the network hasn't been trained enough or that it has overfit.
For more info, read th...

14,116 | Free internet or downloadable resources for sample size calculations

Power analysis refers to analytical procedures that attempt to determine the power of a statistical test (i.e., the probability of rejecting a false null hypothesis) or the sample size (i.e., $N$) required to achieve a given power. You can search Cross Validated for more information about power analysis by clicking he...

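Beyond online calculators, a minimal self-contained sketch of a sample-size calculation, using the Python standard library's `NormalDist` and the usual normal-approximation formula for a two-sample comparison of means (a z-test approximation, not the exact t-based calculation dedicated packages use):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    # Normal-approximation sample size per group:
    # n = 2 * (sigma * (z_{1-alpha/2} + z_{power}) / delta)^2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

# detecting a mean difference of 0.5 SD with 80% power at alpha = 0.05
n = n_per_group(delta=0.5, sigma=1.0)   # 63 per group
```
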
14,117 | What is the "$R^2$" value given in the summary of a coxph model in R

Using getS3method("summary", "coxph") you can look at how it is calculated.
The relevant code lines are the following:
logtest <- -2 * (cox$loglik[1] - cox$loglik[2])
rval$rsq <- c(rsq = 1 - exp(-logtest/cox$n),
            maxrsq = 1 - exp(2 * cox$loglik[1]/cox$n))
Here cox$loglik is "a vector of length 2 containing the ...

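The two R lines translate directly; a Python re-computation with hypothetical null and full partial log-likelihoods (the actual values would come from a fitted model):

```python
import math

# Hypothetical fit: null and full partial log-likelihoods, n observations
loglik_null, loglik_full, n = -120.0, -100.0, 80

logtest = -2 * (loglik_null - loglik_full)   # likelihood-ratio statistic = 40
rsq = 1 - math.exp(-logtest / n)             # the "rsq" reported by summary.coxph
maxrsq = 1 - math.exp(2 * loglik_null / n)   # its attainable maximum
```
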
14,118 | What is the "$R^2$" value given in the summary of a coxph model in R

Dividing by $n$, the number of observations, in the summary of coxph is wrong; it should be the number of uncensored events. See O'Quigley et al. (2005), "Explained randomness in proportional hazards models", Statistics in Medicine, pp. 479-489.

14,119 | If I repeat every sample observation in a linear regression model and rerun the regression how would the result be affected? [duplicate]

Conceptually, you are adding no "new" information, but you "know" that information more precisely.
This would therefore result in the same regression coefficients, with smaller standard errors.
For example, in Stata, the expand x function duplicates each observation x times.
sysuse auto, clear
regress mpg weight length...

14,120 | If I repeat every sample observation in a linear regression model and rerun the regression how would the result be affected? [duplicate]

Ordinary linear regression solves the problem $$w^* = \mbox{argmin}_w ||Xw - y||^2$$ where $X$ is the matrix of predictors and $y$ is the response. If you repeat each sample $M$ times, the objective function to be minimized is unchanged (except for a multiplicative factor $M$). Therefore the weight vector t...

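The claim in both answers (identical coefficients, smaller standard errors) can be checked numerically. A NumPy sketch with simulated data, duplicating every observation once:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

b1, se1 = ols(X, y)
b2, se2 = ols(np.vstack([X, X]), np.concatenate([y, y]))
# identical coefficients, but smaller (spuriously precise) standard errors
```
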
14,121 | When to use weighted Euclidean distance and how to determine the weights to use?

Weights for standardisation
The setup you have is a variant of Mahalanobis distance. When $w$ is the reciprocal of each measurement's variance, you are effectively putting all the measurements on the same scale. This implies you think the variation in each is equally 'important', but that some are measured in u...

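A short sketch with hypothetical measurements, showing that weighting by $w_i = 1/\sigma_i^2$ is the same as standardising each coordinate first and then using plain Euclidean distance:

```python
import math

# Two points measured on very different scales (hypothetical units)
x = [170.0, 70.0]
y = [180.0, 65.0]
var = [100.0, 25.0]   # per-coordinate variances

# Weighted Euclidean distance with w_i = 1 / var_i ...
w = [1 / v for v in var]
d_weighted = math.sqrt(sum(wi * (xi - yi) ** 2
                           for wi, xi, yi in zip(w, x, y)))

# ... equals the plain Euclidean distance of the standardised coordinates
xs = [xi / math.sqrt(v) for xi, v in zip(x, var)]
ys = [yi / math.sqrt(v) for yi, v in zip(y, var)]
d_standardised = math.sqrt(sum((a - b) ** 2 for a, b in zip(xs, ys)))
```
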
14,122 | What methods can be used to determine the Order of Integration of a time series?

There are a number of statistical tests (known as "unit root tests") for dealing with this problem. The most popular is probably the Augmented Dickey-Fuller (ADF) test, although the Phillips-Perron (PP) test and the KPSS test are also widely used.
Both the ADF and PP tests are based on a null hypothesis of a unit ro...

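This is not the ADF test itself (which would need a statistics package such as statsmodels), but a pure-Python illustration of what "integrated of order 1" means: a random walk is the cumulative sum of white noise, and differencing it once recovers the stationary noise.

```python
import random

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(500)]

# An I(1) series: a random walk, i.e. the cumulative sum of white noise
walk = []
total = 0.0
for e in noise:
    total += e
    walk.append(total)

# Differencing once recovers the stationary noise, so the walk is I(1)
diffed = [walk[t] - walk[t - 1] for t in range(1, len(walk))]
# diffed matches noise[1:] (up to floating-point rounding)
```
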
14,123 | What methods can be used to determine the Order of Integration of a time series?

Also, for some elaborate discussion (including bashing of ADF / PP / KPSS :) you might want to have a look at the book by Maddala and Kim:
http://www.amazon.com/Cointegration-Structural-Change-Themes-Econometrics/dp/0521587824
It is quite extensive and not always easy to read, but a useful reference.

14,124 | Elastic/ridge/lasso analysis, what then?

These methods, the lasso and elastic net, were born out of the problems of both feature selection and prediction. It's through these two lenses that I think an explanation can be found.
Matthew Gunn nicely explains in his reply that these two goals are distinct and often taken up by different people. However, fortunate...

14,125 | Elastic/ridge/lasso analysis, what then?

What you're doing with elastic net, ridge, or lasso, using cross-validation to choose regularization parameters, is fitting some linear form to optimize prediction. Why these particular regularization parameters? Because they work best for prediction on new data. Shrinking coefficient estimates towards zero, introducing bi...

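The workflow described, fit a regularized linear form and pick the regularization strength by validation performance, can be sketched in NumPy. This uses ridge (which has a closed form; lasso and elastic net do not) and a single hold-out split standing in for full cross-validation, on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 120, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.0]
y = X @ beta_true + rng.standard_normal(n)

def ridge(X, y, lam):
    # closed-form ridge solution (X^T X + lam I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# choose lambda on a held-out split, as cross-validation would
X_tr, y_tr, X_va, y_va = X[:80], y[:80], X[80:], y[80:]
lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
errors = {lam: float(np.mean((y_va - X_va @ ridge(X_tr, y_tr, lam)) ** 2))
          for lam in lambdas}
best = min(errors, key=errors.get)   # the lambda that predicts best
```
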
14,126 | Test for difference between 2 empirical discrete distributions | The Kolmogorov-Smirnov can still be used, but if you use the tabulated critical values it will be conservative (which is only a problem because it pushes down your power curve). Better to get the permutation distribution of the statistic, so that your significance levels are what you choose them to be. This will only m... | Test for difference between 2 empirical discrete distributions | The Kolmogorov-Smirnov can still be used, but if you use the tabulated critical values it will be conservative (which is only a problem because it pushes down your power curve). Better to get the perm | Test for difference between 2 empirical discrete distributions
The Kolmogorov-Smirnov can still be used, but if you use the tabulated critical values it will be conservative (which is only a problem because it pushes down your power curve). Better to get the permutation distribution of the statistic, so that your signi... | Test for difference between 2 empirical discrete distributions
The Kolmogorov-Smirnov can still be used, but if you use the tabulated critical values it will be conservative (which is only a problem because it pushes down your power curve). Better to get the perm |
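A hedged sketch of the permutation approach the answer recommends (pure Python, made-up data; `ks_stat` and `perm_pvalue` are illustrative helper names, not from the answer):

```python
import random

def ks_stat(x, y):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    pts = sorted(set(x) | set(y))
    cdf = lambda s, t: sum(v <= t for v in s) / len(s)
    return max(abs(cdf(x, t) - cdf(y, t)) for t in pts)

def perm_pvalue(x, y, n_perm=2000, seed=0):
    """p-value from the permutation distribution of the KS statistic,
    appropriate for discrete data where tabulated critical values are conservative."""
    rng = random.Random(seed)
    observed = ks_stat(x, y)
    pooled = list(x) + list(y)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if ks_stat(pooled[:len(x)], pooled[len(x):]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)  # add-one correction keeps p > 0
```

Because the null distribution is built from the data's own ties, the significance level is what you choose it to be, unlike the conservative tabulated values.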
14,127 | Can Hazard Ratio be translated into ratio of medians of survival time? | Your intuition is correct. The following relationship between survival functions holds:
$$
S_1(t)=S_0(t)^r
$$
where $r$ is the hazard ratio (see, e.g. the Wikipedia article Hazard ratio). From this we may show that your statement implies an exponential survival function.
Let us denote the medians by $M_r$, $M_1$ for tw... | Can Hazard Ratio be translated into ratio of medians of survival time? | Your intuition is correct. The following relationship between survival functions holds:
$$
S_1(t)=S_0(t)^r
$$
where $r$ is the hazard ratio (see, e.g. the Wikipedia article Hazard ratio). From this we | Can Hazard Ratio be translated into ratio of medians of survival time?
Your intuition is correct. The following relationship between survival functions holds:
$$
S_1(t)=S_0(t)^r
$$
where $r$ is the hazard ratio (see, e.g. the Wikipedia article Hazard ratio). From this we may show that your statement implies an exponent... | Can Hazard Ratio be translated into ratio of medians of survival time?
Your intuition is correct. The following relationship between survival functions holds:
$$
S_1(t)=S_0(t)^r
$$
where $r$ is the hazard ratio (see, e.g. the Wikipedia article Hazard ratio). From this we |
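A small numerical sketch of the $S_1(t)=S_0(t)^r$ relationship, under an assumed exponential baseline with made-up rate `lam` and hazard ratio `r` (in this case the medians scale exactly by $1/r$):

```python
import math

lam, r = 0.4, 2.0  # assumed baseline rate and hazard ratio

S0 = lambda t: math.exp(-lam * t)   # baseline exponential survival
S1 = lambda t: S0(t) ** r           # proportional-hazards relation S1 = S0^r

def median(S, lo=0.0, hi=1e3, tol=1e-10):
    """Solve S(t) = 1/2 by bisection (S is decreasing)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if S(mid) > 0.5 else (lo, mid)
    return (lo + hi) / 2

m0, m1 = median(S0), median(S1)
# Exponential case: m0 = log(2)/lam, m1 = log(2)/(r*lam), so m0/m1 == r.
```

For non-exponential baselines the hazard ratio does not translate into a fixed ratio of medians, which is the point of the derivation above.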
14,128 | How to choose the right optimization algorithm? | Based on what you said: I assume you have to optimize for 50 variables; I also assume that you are having a situation that it is very expensive to find analytical derivatives (let alone get numericals out) and that your optimization is unconstrained.
Let me stress, you are a bit unluckily cause between 25-30 and 100 va... | How to choose the right optimization algorithm? | Based on what you said: I assume you have to optimize for 50 variables; I also assume that you are having a situation that it is very expensive to find analytical derivatives (let alone get numericals | How to choose the right optimization algorithm?
Based on what you said: I assume you have to optimize for 50 variables; I also assume that you are having a situation that it is very expensive to find analytical derivatives (let alone get numericals out) and that your optimization is unconstrained.
Let me stress, you ar... | How to choose the right optimization algorithm?
Based on what you said: I assume you have to optimize for 50 variables; I also assume that you are having a situation that it is very expensive to find analytical derivatives (let alone get numericals |
14,129 | How to choose the right optimization algorithm? | Maybe you should get yourself an introductory book about numerical optimization. You will need to take into account your function in order to decide for the algorithm.
Among the algorithms you mention, important differences are whether the Jacobian
or Hessian is needed or only the function itself.
Considering that thi... | How to choose the right optimization algorithm? | Maybe you should get yourself an introductory book about numerical optimization. You will need to take into account your function in order to decide for the algorithm.
Among the algorithms you mention | How to choose the right optimization algorithm?
Maybe you should get yourself an introductory book about numerical optimization. You will need to take into account your function in order to decide for the algorithm.
Among the algorithms you mention, important differences are whether the Jacobian
or Hessian is needed o... | How to choose the right optimization algorithm?
Maybe you should get yourself an introductory book about numerical optimization. You will need to take into account your function in order to decide for the algorithm.
Among the algorithms you mention |
14,130 | Time series and anomaly detection | Regarding your first question, I would recommend that you read this famous article (Clustering of Time Series Subsequences is Meaningless) before doing clustering on a time series. It is clearly written and illustrates many pitfalls that you want to avoid. | Time series and anomaly detection | Regarding your first question, I would recommend that you read this famous article (Clustering of Time Series Subsequences is Meaningless) before doing clustering on a time series. It is clearly writt | Time series and anomaly detection
Regarding your first question, I would recommend that you read this famous article (Clustering of Time Series Subsequences is Meaningless) before doing clustering on a time series. It is clearly written and illustrates many pitfalls that you want to avoid. | Time series and anomaly detection
Regarding your first question, I would recommend that you read this famous article (Clustering of Time Series Subsequences is Meaningless) before doing clustering on a time series. It is clearly writt |
14,131 | Time series and anomaly detection | Anomaly detection or "Intervention Detection" has been championd by G.C.Tiao and others. To do science is to search for repeated patterns.To detect anomalies is to identify values that do not follow repeated patterns. We learn from Newton "Whoever knows the ways of Nature will more easily notice her deviations and, on... | Time series and anomaly detection | Anomaly detection or "Intervention Detection" has been championd by G.C.Tiao and others. To do science is to search for repeated patterns.To detect anomalies is to identify values that do not follow | Time series and anomaly detection
Anomaly detection or "Intervention Detection" has been championed by G.C.Tiao and others. To do science is to search for repeated patterns. To detect anomalies is to identify values that do not follow repeated patterns. We learn from Newton "Whoever knows the ways of Nature will more ea... | Time series and anomaly detection
Anomaly detection or "Intervention Detection" has been championed by G.C.Tiao and others. To do science is to search for repeated patterns. To detect anomalies is to identify values that do not follow
14,132 | Time series and anomaly detection | For time series anomaly detection there can be multiple approaches. As you have said, if you are using ARIMA as the model, you can use MAPE or SMAPE as the error metric and use a confidence threshold using it. Anything falling beyond the CI band can be an anomaly. Similarly you can go for DBSCAN or statistical profilin... | Time series and anomaly detection | For time series anomaly detection there can be multiple approaches. As you have said, if you are using ARIMA as the model, you can use MAPE or SMAPE as the error metric and use a confidence threshold | Time series and anomaly detection
For time series anomaly detection there can be multiple approaches. As you have said, if you are using ARIMA as the model, you can use MAPE or SMAPE as the error metric and use a confidence threshold using it. Anything falling beyond the CI band can be an anomaly. Similarly you can go ... | Time series and anomaly detection
For time series anomaly detection there can be multiple approaches. As you have said, if you are using ARIMA as the model, you can use MAPE or SMAPE as the error metric and use a confidence threshold |
14,133 | How does gradient boosting calculate probability estimates? | TL;DR: The log-odds for a sample is the sum of the weights of its terminal leafs. The probability of the sample belonging to class 1 is the inverse-logit transformation of the sum.
Analogously to logistic regression, the logistic function computes probabilities that are linear on the logit scale:
$$
z = Xw \\
\mathbb{... | How does gradient boosting calculate probability estimates? | TL;DR: The log-odds for a sample is the sum of the weights of its terminal leafs. The probability of the sample belonging to class 1 is the inverse-logit transformation of the sum.
Analogously to log | How does gradient boosting calculate probability estimates?
TL;DR: The log-odds for a sample is the sum of the weights of its terminal leaves. The probability of the sample belonging to class 1 is the inverse-logit transformation of the sum.
Analogously to logistic regression, the logistic function computes probabiliti... | How does gradient boosting calculate probability estimates?
TL;DR: The log-odds for a sample is the sum of the weights of its terminal leaves. The probability of the sample belonging to class 1 is the inverse-logit transformation of the sum.
Analogously to log |
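A minimal sketch of the inverse-logit step described above; the leaf weights and base score are made-up values, not the output of any real boosted model:

```python
import math

def sigmoid(z):
    """Inverse-logit: maps a log-odds value to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-tree leaf weights that one sample falls into,
# plus an initial offset (e.g. the log-odds of the base rate).
leaf_weights = [0.3, -0.1, 0.25]
base_score = 0.0

log_odds = base_score + sum(leaf_weights)  # additive model on the logit scale
p = sigmoid(log_odds)                      # P(class = 1) for this sample
```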
14,134 | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM) | There are several reasons that can cause fluctuations in training loss over epochs. The main one though is the fact that almost all neural nets are trained with different forms of stochastic gradient descent. This is why batch_size parameter exists which determines how many samples you want to use to make one update to... | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM) | There are several reasons that can cause fluctuations in training loss over epochs. The main one though is the fact that almost all neural nets are trained with different forms of stochastic gradient | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM)
There are several reasons that can cause fluctuations in training loss over epochs. The main one though is the fact that almost all neural nets are trained with different forms of stochastic gradient descent. This is why batch_size parameter exists... | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM)
There are several reasons that can cause fluctuations in training loss over epochs. The main one though is the fact that almost all neural nets are trained with different forms of stochastic gradient |
14,135 | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM) | Your loss curve doesn't look so bad to me. It should definitely "fluctuate" up and down a bit, as long as the general trend is that it is going down - this makes sense.
Batch size will also play into how your network learns, so you might want to optimize that along with your learning rate. Also, I would plot the entir... | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM) | Your loss curve doesn't look so bad to me. It should definitely "fluctuate" up and down a bit, as long as the general trend is that it is going down - this makes sense.
Batch size will also play into | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM)
Your loss curve doesn't look so bad to me. It should definitely "fluctuate" up and down a bit, as long as the general trend is that it is going down - this makes sense.
Batch size will also play into how your network learns, so you might want to o... | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM)
Your loss curve doesn't look so bad to me. It should definitely "fluctuate" up and down a bit, as long as the general trend is that it is going down - this makes sense.
Batch size will also play into |
14,136 | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM) | The fluctuations are normal within certain limits and depend on the fact that you use a heuristic method but in your case they are excessive. Despite all the performance takes a definite direction and therefore the system works. From the graphs you have posted, the problem depends on your data so it's a difficult train... | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM) | The fluctuations are normal within certain limits and depend on the fact that you use a heuristic method but in your case they are excessive. Despite all the performance takes a definite direction and | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM)
The fluctuations are normal within certain limits and depend on the fact that you use a heuristic method but in your case they are excessive. Despite this, the performance takes a definite direction and therefore the system works. From the graphs you have posted, the problem depends on your data so it's a difficult train... | Why does the loss/accuracy fluctuate during the training? (Keras, LSTM)
The fluctuations are normal within certain limits and depend on the fact that you use a heuristic method but in your case they are excessive. Despite this, the performance takes a definite direction and
14,137 | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian | The starred step is valid because (a) $p$ and $q$ have the same zeroth and second moments and (b) $\log(p)$ is a polynomial function of the components of $\mathbf{x}$ whose terms have total degrees $0$ or $2$.
You need to know only two things about a multivariate normal distribution with zero mean:
$\log(p)$ is a qua... | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian | The starred step is valid because (a) $p$ and $q$ have the same zeroth and second moments and (b) $\log(p)$ is a polynomial function of the components of $\mathbf{x}$ whose terms have total degrees $0 | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian
The starred step is valid because (a) $p$ and $q$ have the same zeroth and second moments and (b) $\log(p)$ is a polynomial function of the components of $\mathbf{x}$ whose terms have total degrees $0$ or $2$.
You need to know onl... | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian
The starred step is valid because (a) $p$ and $q$ have the same zeroth and second moments and (b) $\log(p)$ is a polynomial function of the components of $\mathbf{x}$ whose terms have total degrees $0 |
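One way to check the theorem numerically is to compare the closed-form differential entropy of a zero-mean Gaussian with that of another zero-mean distribution matched to the same variance; a Laplace is used below purely for illustration:

```python
import math

def gaussian_entropy(var):
    """Differential entropy of N(0, var): 0.5 * ln(2*pi*e*var)."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

def laplace_entropy_matched(var):
    """Differential entropy of a Laplace(b) with the same variance:
    Var = 2*b**2  =>  b = sqrt(var/2);  h = 1 + ln(2*b)."""
    b = math.sqrt(var / 2)
    return 1 + math.log(2 * b)

var = 1.7  # any fixed variance
# The Gaussian attains the larger entropy at every matched variance:
assert gaussian_entropy(var) > laplace_entropy_matched(var)
```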
14,138 | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian | I think what happens is that in the integrals in both (4.27) and (4.28) you have $q(x)$ and $p(x)$ multiplying terms of the form $\sigma_{ij}x_ix_j$ (because $p(x)$ is a normal density, when you take the log you obtain just such kind of terms from the exponent plus constants). But then the condition in the theorem ens... | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian | I think what happens is that in the integrals in both (4.27) and (4.28) you have $q(x)$ and $p(x)$ multiplying terms of the form $\sigma_{ij}x_ix_j$ (because $p(x)$ is a normal density, when you take | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian
I think what happens is that in the integrals in both (4.27) and (4.28) you have $q(x)$ and $p(x)$ multiplying terms of the form $\sigma_{ij}x_ix_j$ (because $p(x)$ is a normal density, when you take the log you obtain just such k... | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian
I think what happens is that in the integrals in both (4.27) and (4.28) you have $q(x)$ and $p(x)$ multiplying terms of the form $\sigma_{ij}x_ix_j$ (because $p(x)$ is a normal density, when you take |
14,139 | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian | We can also verify the above result by solving the constrained optimization problem. For simplicity, let's prove for one dimensional random variable $X$ first (with $\Sigma=[\sigma^2]$ fixed, with $V(X) = E[(X-\mu)^2]=\sigma^2$, where $E[X]=\mu$), that can be generalized to multliple dimensions (random vectors). Let th... | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian | We can also verify the above result by solving the constrained optimization problem. For simplicity, let's prove for one dimensional random variable $X$ first (with $\Sigma=[\sigma^2]$ fixed, with $V( | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian
We can also verify the above result by solving the constrained optimization problem. For simplicity, let's prove for one dimensional random variable $X$ first (with $\Sigma=[\sigma^2]$ fixed, with $V(X) = E[(X-\mu)^2]=\sigma^2$, wh... | Prove that the maximum entropy distribution with a fixed covariance matrix is a Gaussian
We can also verify the above result by solving the constrained optimization problem. For simplicity, let's prove for one dimensional random variable $X$ first (with $\Sigma=[\sigma^2]$ fixed, with $V( |
14,140 | What measure of training error to report for Random Forests? | To add to @Soren H. Welling's answer.
1. Is it generally accepted to report OOB training error as the training error measure for random forests?
No. OOB error on the trained model is not the same as training error. It can, however, serve as a measure of predictive accuracy.
2. Is it true that the traditional measure o... | What measure of training error to report for Random Forests? | To add to @Soren H. Welling's answer.
1. Is it generally accepted to report OOB training error as the training error measure for random forests?
No. OOB error on the trained model is not the same as t | What measure of training error to report for Random Forests?
To add to @Soren H. Welling's answer.
1. Is it generally accepted to report OOB training error as the training error measure for random forests?
No. OOB error on the trained model is not the same as training error. It can, however, serve as a measure of predi... | What measure of training error to report for Random Forests?
To add to @Soren H. Welling's answer.
1. Is it generally accepted to report OOB training error as the training error measure for random forests?
No. OOB error on the trained model is not the same as t |
14,141 | What measure of training error to report for Random Forests? | [edited 21.7.15 8:31 AM CEST]
I suppose you used RF for classification. Because in this case, the algorithm produces fully grown trees with pure terminal nodes of only one target class.
predict(model, data=X_train)
This line of coding is like a dog chasing [~66% of] its own tail. The prediction of any training sample... | What measure of training error to report for Random Forests? | [edited 21.7.15 8:31 AM CEST]
I suppose you used RF for classification. Because in this case, the algorithm produces fully grown trees with pure terminal nodes of only one target class.
predict(model | What measure of training error to report for Random Forests?
[edited 21.7.15 8:31 AM CEST]
I suppose you used RF for classification. Because in this case, the algorithm produces fully grown trees with pure terminal nodes of only one target class.
predict(model, data=X_train)
This line of coding is like a dog chasing ... | What measure of training error to report for Random Forests?
[edited 21.7.15 8:31 AM CEST]
I suppose you used RF for classification. Because in this case, the algorithm produces fully grown trees with pure terminal nodes of only one target class.
predict(model |
14,142 | Smoothing - when to use it and when not to? | Exponential Smoothing is a classic technique used in noncausal time series forecasting. As long as you only use it in straightforward forecasting and don't use in-sample smoothed fits as an input to another data mining or statistical algorithm, Briggs' critique does not apply. (Accordingly, I am skeptical about using i... | Smoothing - when to use it and when not to? | Exponential Smoothing is a classic technique used in noncausal time series forecasting. As long as you only use it in straightforward forecasting and don't use in-sample smoothed fits as an input to a | Smoothing - when to use it and when not to?
Exponential Smoothing is a classic technique used in noncausal time series forecasting. As long as you only use it in straightforward forecasting and don't use in-sample smoothed fits as an input to another data mining or statistical algorithm, Briggs' critique does not apply... | Smoothing - when to use it and when not to?
Exponential Smoothing is a classic technique used in noncausal time series forecasting. As long as you only use it in straightforward forecasting and don't use in-sample smoothed fits as an input to a |
14,143 | Smoothing - when to use it and when not to? | Claiming that smoothing is inappropriate for a modeling analysis condemns it to having higher mean square error than it otherwise might. Mean square error or MSE can be decomposed into three terms, a square of a value called ``bias'', a variance, and some irreducible error. (This is shown in the citations below.) Exces... | Smoothing - when to use it and when not to? | Claiming that smoothing is inappropriate for a modeling analysis condemns it to having higher mean square error than it otherwise might. Mean square error or MSE can be decomposed into three terms, a | Smoothing - when to use it and when not to?
Claiming that smoothing is inappropriate for a modeling analysis condemns it to having higher mean square error than it otherwise might. Mean square error or MSE can be decomposed into three terms, a square of a value called ``bias'', a variance, and some irreducible error. (... | Smoothing - when to use it and when not to?
Claiming that smoothing is inappropriate for a modeling analysis condemns it to having higher mean square error than it otherwise might. Mean square error or MSE can be decomposed into three terms, a |
14,144 | Logistic Regression : How to obtain a saturated model | For each $y_i$, the fitted probability from the saturated model will be the same as $y_i$, either zero or one. As explained here, the likelihood of the saturated model is $1$. Therefore, the deviance of such model will be $-2\log(1/1) = 0$, on $0$ df. Here is an example from R:
y = c(1,1,1,0,0,0)
a <- factor(1:length(y... | Logistic Regression : How to obtain a saturated model | For each $y_i$, the fitted probability from the saturated model will be the same as $y_i$, either zero or one. As explained here, the likelihood of the saturated model is $1$. Therefore, the deviance | Logistic Regression : How to obtain a saturated model
For each $y_i$, the fitted probability from the saturated model will be the same as $y_i$, either zero or one. As explained here, the likelihood of the saturated model is $1$. Therefore, the deviance of such model will be $-2\log(1/1) = 0$, on $0$ df. Here is an exa... | Logistic Regression : How to obtain a saturated model
For each $y_i$, the fitted probability from the saturated model will be the same as $y_i$, either zero or one. As explained here, the likelihood of the saturated model is $1$. Therefore, the deviance |
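A sketch of the same deviance computation in Python, using the answer's made-up response vector; since the saturated fit reproduces each $y_i$ exactly, the likelihood is $1$ and the deviance is $0$:

```python
import math

y = [1, 1, 1, 0, 0, 0]
p_saturated = [float(v) for v in y]  # saturated model: fitted prob equals each y_i

def log_lik(y, p, eps=1e-12):
    """Bernoulli log-likelihood: sum of y*log(p) + (1-y)*log(1-p)."""
    return sum(yi * math.log(max(pi, eps)) + (1 - yi) * math.log(max(1 - pi, eps))
               for yi, pi in zip(y, p))

deviance = -2 * log_lik(y, p_saturated)  # -2 * log(1) = 0 for the saturated fit
```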
14,145 | How to understand the correlation coefficient formula? | In the comments, 15 ways to understand the correlation coefficent were suggested:
"Thirteen Ways to Look at the Correlation Coefficient" (Rodgers & Nicewander 1988)
Via covariance
Via circles
The 13 ways discussed in the Rodgers and Nicewander article (The American Statistician, February 1988) are
A Function of Raw... | How to understand the correlation coefficient formula? | In the comments, 15 ways to understand the correlation coefficent were suggested:
"Thirteen Ways to Look at the Correlation Coefficient" (Rodgers & Nicewander 1988)
Via covariance
Via circles
The 1 | How to understand the correlation coefficient formula?
In the comments, 15 ways to understand the correlation coefficient were suggested:
"Thirteen Ways to Look at the Correlation Coefficient" (Rodgers & Nicewander 1988)
Via covariance
Via circles
The 13 ways discussed in the Rodgers and Nicewander article (The Ameri... | How to understand the correlation coefficient formula?
In the comments, 15 ways to understand the correlation coefficient were suggested:
"Thirteen Ways to Look at the Correlation Coefficient" (Rodgers & Nicewander 1988)
Via covariance
Via circles
The 1 |
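Several of the listed views reduce to the covariance definition $r = \operatorname{cov}(x,y)/(s_x s_y)$; a small self-contained sketch:

```python
import math

def pearson_r(x, y):
    """Correlation as covariance divided by the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)
```

Standardizing both variables first makes the same quantity appear as the mean product of z-scores, which is one of the article's thirteen views.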
14,146 | What are efficient algorithms to compute singular value decomposition (SVD)? | The main work-horse behind the computation of SVD is the QR algorithm. Having said that there are many different algorithms to calculate the singular value decomposition of a generic $M$-by-$N$ matrix $A$. A great schematic on the issue available here (from the documentation of Intel's MKL) is the following:
As you se... | What are efficient algorithms to compute singular value decomposition (SVD)? | The main work-horse behind the computation of SVD is the QR algorithm. Having said that there are many different algorithms to calculate the singular value decomposition of a generic $M$-by-$N$ matrix | What are efficient algorithms to compute singular value decomposition (SVD)?
The main work-horse behind the computation of SVD is the QR algorithm. Having said that there are many different algorithms to calculate the singular value decomposition of a generic $M$-by-$N$ matrix $A$. A great schematic on the issue availa... | What are efficient algorithms to compute singular value decomposition (SVD)?
The main work-horse behind the computation of SVD is the QR algorithm. Having said that there are many different algorithms to calculate the singular value decomposition of a generic $M$-by-$N$ matrix |
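Whatever algorithm the backend chooses, the defining properties of the factorization are easy to verify numerically; a sketch using NumPy's `linalg.svd` on an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Thin SVD: A = U @ diag(s) @ Vt, singular values sorted in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

assert np.allclose(A, U @ np.diag(s) @ Vt)  # exact factorization
assert np.all(s[:-1] >= s[1:])              # descending order
# Squared singular values are the eigenvalues of A.T @ A:
assert np.allclose(np.sort(s ** 2), np.sort(np.linalg.eigvalsh(A.T @ A)))
```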
14,147 | What are the best ways to generate Bayesian prior estimates using beliefs of non-statisticians? | This is a good question. I'm going to use a simple example to illustrate my approach.
Suppose I am working with someone who needs to provide me priors on the mean and the variance for a gaussian likelihood. Something like
$$ y \sim \mathcal{N}(\mu, \sigma^2) $$
The question is: "What are this person's priors on $\mu$... | What are the best ways to generate Bayesian prior estimates using beliefs of non-statisticians? | This is a good question. I'm going to use a simple example to illustrate my approach.
Suppose I am working with someone who needs to provide me priors on the mean and the variance for a gaussian like | What are the best ways to generate Bayesian prior estimates using beliefs of non-statisticians?
This is a good question. I'm going to use a simple example to illustrate my approach.
Suppose I am working with someone who needs to provide me priors on the mean and the variance for a gaussian likelihood. Something like
... | What are the best ways to generate Bayesian prior estimates using beliefs of non-statisticians?
This is a good question. I'm going to use a simple example to illustrate my approach.
Suppose I am working with someone who needs to provide me priors on the mean and the variance for a gaussian like |
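One concrete elicitation trick consistent with this approach is quantile matching: ask the expert for a median and an upper quantile, then back out the prior's parameters. A sketch under made-up elicited numbers (the helper name is illustrative, not from the answer):

```python
Z95 = 1.6448536269514722  # standard normal 95th-percentile quantile

def normal_prior_from_quantiles(median, q95):
    """Back out (mu, sigma) of a normal prior from an elicited median and
    95th percentile: mu = median, sigma = (q95 - mu) / z_0.95."""
    mu = median
    sigma = (q95 - mu) / Z95
    return mu, sigma

# e.g. the expert believes the mean is "around 5, almost surely below 10":
mu, sigma = normal_prior_from_quantiles(5.0, 10.0)
```

Plotting the implied prior back to the expert, as suggested above, closes the loop: if the curve looks wrong to them, adjust the elicited quantiles and repeat.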
14,148 | What does "variational" mean? | It means using variational inference (at least for the first two).
In short, it's a method to approximate maximum likelihood when the probability density is complicated (and thus MLE is hard).
It uses Evidence Lower Bound (ELBO) as a proxy to ML:
$log(p(x)) \geq \mathbb{E}_q[log(p(x, Z))] - \mathbb{E}_q[log(q(Z))]$
Where... | What does "variational" mean? | It means using variational inference (at least for the first two).
In short, it's a method to approximate maximum likelihood when the probability density is complicated (and thus MLE is hard).
It use | What does "variational" mean?
It means using variational inference (at least for the first two).
In short, it's a method to approximate maximum likelihood when the probability density is complicated (and thus MLE is hard).
It uses Evidence Lower Bound (ELBO) as a proxy to ML:
$log(p(x)) \geq \mathbb{E}_q[log(p(x, Z))] - \mathbb{E}_q[log(q(Z))]$
It means using variational inference (at least for the first two).
In short, it's an method to approximate maximum likelihood when the probability density is complicated (and thus MLE is hard).
It use |
14,149 | What does "variational" mean? | You can find a good explanation in this source by Jason Eisner, where he cites:
The term variational is used because you pick the best q in Q -- the term derives from the "calculus of variations", which deals with optimization problems that pick the best function (in this case, a distribution q).
One way it occurs is w... | What does "variational" mean? | You can find a good explanation in this source by Jason Eisner, where he cites:
The term variational is used because you pick the best q in Q -- the term derives from the "calculus of variations", whi | What does "variational" mean?
You can find a good explanation in this source by Jason Eisner, where he cites:
The term variational is used because you pick the best q in Q -- the term derives from the "calculus of variations", which deals with optimization problems that pick the best function (in this case, a distribut... | What does "variational" mean?
You can find a good explanation in this source by Jason Eisner, where he cites:
The term variational is used because you pick the best q in Q -- the term derives from the "calculus of variations", whi |
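The ELBO inequality both answers refer to can be checked on a toy discrete latent variable, where the bound is tight exactly when $q$ equals the true posterior (the joint probabilities below are made up):

```python
import math

# Toy model: one observed x, latent Z in {0, 1}, with an assumed joint p(x, z).
p_joint = {0: 0.1, 1: 0.3}
evidence = sum(p_joint.values())
log_px = math.log(evidence)  # log p(x)

def elbo(q):
    """E_q[log p(x, Z)] - E_q[log q(Z)] for a distribution q over Z."""
    return sum(q[z] * (math.log(p_joint[z]) - math.log(q[z]))
               for z in q if q[z] > 0)

posterior = {z: v / evidence for z, v in p_joint.items()}
assert abs(elbo(posterior) - log_px) < 1e-9  # bound is tight at the posterior
assert elbo({0: 0.5, 1: 0.5}) < log_px       # strictly below for any other q
```

Picking the best $q$ from a tractable family is exactly the optimization over functions that gives the method its name.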
14,150 | Why do we care if an MA process is invertible? | Invertibility is not really a big deal because almost any Gaussian, non-invertible MA$(q)$ model can be changed to an invertible MA$(q)$ model representing the same process by changing the parameter values. This is mentioned in most textbooks for the MA(1) model but it is true more generally.
As an example, consider t... | Why do we care if an MA process is invertible? | Invertibility is not really a big deal because almost any Gaussian, non-invertible MA$(q)$ model can be changed to an invertible MA$(q)$ model representing the same process by changing the parameter v | Why do we care if an MA process is invertible?
Invertibility is not really a big deal because almost any Gaussian, non-invertible MA$(q)$ model can be changed to an invertible MA$(q)$ model representing the same process by changing the parameter values. This is mentioned in most textbooks for the MA(1) model but it is... | Why do we care if an MA process is invertible?
Invertibility is not really a big deal because almost any Gaussian, non-invertible MA$(q)$ model can be changed to an invertible MA$(q)$ model representing the same process by changing the parameter v |
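The MA(1) case can be checked directly: for $x_t = e_t + \theta e_{t-1}$ with $\operatorname{Var}(e_t)=\sigma^2$, the pair $(\theta, \sigma^2)$ and its invertible twin $(1/\theta, \sigma^2\theta^2)$ imply identical autocovariances (a sketch with made-up values):

```python
def ma1_autocov(theta, sigma2):
    """(gamma_0, gamma_1) of an MA(1) process x_t = e_t + theta * e_{t-1}."""
    return sigma2 * (1 + theta ** 2), sigma2 * theta

theta, sigma2 = 2.0, 1.0                              # non-invertible: |theta| > 1
inv_theta, inv_sigma2 = 1 / theta, sigma2 * theta ** 2  # invertible twin

# Both parameterizations imply the same second-order structure,
# so a Gaussian process cannot distinguish them:
assert ma1_autocov(theta, sigma2) == ma1_autocov(inv_theta, inv_sigma2)
```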
14,151 | Should I choose Random Forest regressor or classifier? | Use the Classifier. No, they are not both valid.
First, I really encourage you to read yourself into the topic of Regression vs Classification. Because using ML without knowing anything about it will give you wrong results which you won't realize. And that's quite dangerous... (it's a little bit like asking which way ... | Should I choose Random Forest regressor or classifier? | Use the Classifier. No, they are not both valid.
First, I really encourage you to read yourself into the topic of Regression vs Classification. Because using ML without knowing anything about it will | Should I choose Random Forest regressor or classifier?
Use the Classifier. No, they are not both valid.
First, I really encourage you to read yourself into the topic of Regression vs Classification. Because using ML without knowing anything about it will give you wrong results which you won't realize. And that's quite... | Should I choose Random Forest regressor or classifier?
Use the Classifier. No, they are not both valid.
First, I really encourage you to read yourself into the topic of Regression vs Classification. Because using ML without knowing anything about it will |
14,152 | Interpretation of ordinal logistic regression | You have perfectly confused odds and log odds. Log odds are the coefficients; odds are exponentiated coefficients. Besides, the odds interpretation goes the other way round. (I grew up with econometrics thinking about the limited dependent variables, and the odds interpretation of the ordinal regression is... uhm... am... | Interpretation of ordinal logistic regression | You have perfectly confused odds and log odds. Log odds are the coefficients; odds are exponentiated coefficients. Besides, the odds interpretation goes the other way round. (I grew up with econometri | Interpretation of ordinal logistic regression
You have perfectly confused odds and log odds. Log odds are the coefficients; odds are exponentiated coefficients. Besides, the odds interpretation goes the other way round. (I grew up with econometrics thinking about the limited dependent variables, and the odds interpreta... | Interpretation of ordinal logistic regression
You have perfectly confused odds and log odds. Log odds are the coefficients; odds are exponentiated coefficients. Besides, the odds interpretation goes the other way round. (I grew up with econometri |
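Numerically, the log-odds/odds distinction above is just an exponentiation. A small sketch with an invented coefficient of 0.75:

```python
import math

# Hypothetical ordered-logit coefficient: 0.75 is a log odds, not an odds.
log_odds = 0.75
odds_ratio = math.exp(log_odds)

print(round(odds_ratio, 3))  # 2.117: odds multiply by ~2.1 per unit increase
```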
14,153 | Interpretation of ordinal logistic regression | In the ordered logit model, the odds form the ratio of the probability being in any category below a specific threshold vs. the probability being in a category above the same threshold (e.g., with three categories: Probability of being in category A or B vs. C, as well as the probability of being in category A vs. B or... | Interpretation of ordinal logistic regression | In the ordered logit model, the odds form the ratio of the probability being in any category below a specific threshold vs. the probability being in a category above the same threshold (e.g., with thr | Interpretation of ordinal logistic regression
In the ordered logit model, the odds form the ratio of the probability being in any category below a specific threshold vs. the probability being in a category above the same threshold (e.g., with three categories: Probability of being in category A or B vs. C, as well as t... | Interpretation of ordinal logistic regression
In the ordered logit model, the odds form the ratio of the probability being in any category below a specific threshold vs. the probability being in a category above the same threshold (e.g., with thr |
14,154 | Sampling model for crowdsourced data? | Short answer: This is a convenience sample. There is nothing you can do to justify it.
A somewhat longer answer: you are in the same boat as many social networks that run their internal surveys without having much idea as to who would respond to a one-question survey that would appear randomly on Facebook or Google+..... | Sampling model for crowdsourced data? | Short answer: This is a convenience sample. There is nothing you can do to justify it.
A somewhat longer answer: you are in the same boat as many social networks that run their internal surveys witho | Sampling model for crowdsourced data?
Short answer: This is a convenience sample. There is nothing you can do to justify it.
A somewhat longer answer: you are in the same boat as many social networks that run their internal surveys without having much idea as to who would respond to a one-question survey that would ap... | Sampling model for crowdsourced data?
Short answer: This is a convenience sample. There is nothing you can do to justify it.
A somewhat longer answer: you are in the same boat as many social networks that run their internal surveys witho |
14,155 | What is the decision-theoretic justification for Bayesian credible interval procedures? | In univariate interval estimation, the set of possible actions is the set of ordered pairs specifying the endpoints of the interval. Let an element of that set be represented by $(a, b),\text{ } a \le b$.
Highest posterior density intervals
Let the posterior density be $f(\theta)$. The highest posterior density interva... | What is the decision-theoretic justification for Bayesian credible interval procedures? | In univariate interval estimation, the set of possible actions is the set of ordered pairs specifying the endpoints of the interval. Let an element of that set be represented by $(a, b),\text{ } a \le | What is the decision-theoretic justification for Bayesian credible interval procedures?
In univariate interval estimation, the set of possible actions is the set of ordered pairs specifying the endpoints of the interval. Let an element of that set be represented by $(a, b),\text{ } a \le b$.
Highest posterior density i... | What is the decision-theoretic justification for Bayesian credible interval procedures?
In univariate interval estimation, the set of possible actions is the set of ordered pairs specifying the endpoints of the interval. Let an element of that set be represented by $(a, b),\text{ } a \le |
14,156 | What is the decision-theoretic justification for Bayesian credible interval procedures? | Intervals of minimal size
One obvious choice of a loss function for interval selection (both Bayesian and frequentist) is to use the size of the intervals as measured in terms of the marginal distributions. Thus, start with the desired property or the loss function, and derive the intervals that are optimal. This tends... | What is the decision-theoretic justification for Bayesian credible interval procedures? | Intervals of minimal size
One obvious choice of a loss function for interval selection (both Bayesian and frequentist) is to use the size of the intervals as measured in terms of the marginal distribu | What is the decision-theoretic justification for Bayesian credible interval procedures?
Intervals of minimal size
One obvious choice of a loss function for interval selection (both Bayesian and frequentist) is to use the size of the intervals as measured in terms of the marginal distributions. Thus, start with the desi... | What is the decision-theoretic justification for Bayesian credible interval procedures?
Intervals of minimal size
One obvious choice of a loss function for interval selection (both Bayesian and frequentist) is to use the size of the intervals as measured in terms of the marginal distribu |
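The highest-posterior-density idea in the answers above can be sketched directly from posterior draws: among all intervals containing 95% of the sorted samples, take the shortest. This is a pure-numpy sketch; the normal "posterior" is simulated only for illustration.

```python
import numpy as np

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the posterior draws."""
    s = np.sort(samples)
    k = int(np.ceil(mass * len(s)))          # draws the interval must cover
    widths = s[k - 1:] - s[:len(s) - k + 1]  # width of each candidate window
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

rng = np.random.default_rng(0)
draws = rng.normal(loc=2.0, scale=1.0, size=10_000)
print(hpd_interval(draws))  # roughly (0.04, 3.96) for this N(2, 1) "posterior"
```

For a symmetric unimodal posterior the HPD interval and the equal-tailed interval nearly coincide; for skewed posteriors they differ, which is where the shortest-interval criterion matters.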
14,157 | Is there a graphical representation of bias-variance tradeoff in linear regression? | The bias-variance trade-off is based on the breakdown of the mean square error:
$$MSE(\hat{y})=E[(y-\hat{y})^2]=(E[y-\hat{y}])^2+E[(\hat{y}-E[\hat{y}])^2]$$
One way to see the bias-variance trade-off is which properties of the data set are used in the model fit. For the simple model, if we assume that OLS regression was us... | Is there a graphical representation of bias-variance tradeoff in linear regression? | The bias-variance trade-off is based on the breakdown of the mean square error:
$$MSE(\hat{y})=E[(y-\hat{y})^2]=(E[y-\hat{y}])^2+E[(\hat{y}-E[\hat{y}])^2]$$
One way to see the bias-variance trade-off is w | Is there a graphical representation of bias-variance tradeoff in linear regression?
The bias-variance trade-off is based on the breakdown of the mean square error:
$$MSE(\hat{y})=E[(y-\hat{y})^2]=(E[y-\hat{y}])^2+E[(\hat{y}-E[\hat{y}])^2]$$
One way to see the bias-variance trade-off is which properties of the data set are u... | Is there a graphical representation of bias-variance tradeoff in linear regression?
The bias-variance trade-off is based on the breakdown of the mean square error:
$$MSE(\hat{y})=E[(y-\hat{y})^2]=(E[y-\hat{y}])^2+E[(\hat{y}-E[\hat{y}])^2]$$
One way to see the bias-variance trade-off is w |
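The MSE breakdown into squared bias plus variance can be checked numerically. A minimal sketch (the shrinkage estimator and all numbers here are invented for illustration): repeatedly draw training samples, form a deliberately biased estimate, and compare the empirical MSE with bias squared plus variance.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = 1.0
# A deliberately biased estimator: shrink each sample mean toward zero.
estimates = np.array([0.8 * rng.normal(y_true, 1.0, 20).mean()
                      for _ in range(50_000)])

mse = np.mean((y_true - estimates) ** 2)
bias_sq = (y_true - estimates.mean()) ** 2
variance = estimates.var()

print(mse, bias_sq + variance)  # equal up to floating-point error
```

The agreement is exact for sample moments, not just approximate, because the decomposition is an algebraic identity.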
14,158 | Is there a graphical representation of bias-variance tradeoff in linear regression? | To summarize with what I think I know in a non-mathematical manner:
bias - your prediction is going to be incorrect when you use the simple model and that will happen on any dataset you use the model on. Your prediction is expected to be wrong
variance - if you use the complex model, you will get very different predic... | Is there a graphical representation of bias-variance tradeoff in linear regression? | To summarize with what I think I know in a non-mathematical manner:
bias - your prediction is going to be incorrect when you use the simple model and that will happen on any dataset you use the model | Is there a graphical representation of bias-variance tradeoff in linear regression?
To summarize with what I think I know in a non-mathematical manner:
bias - your prediction is going to be incorrect when you use the simple model and that will happen on any dataset you use the model on. Your prediction is expected to ... | Is there a graphical representation of bias-variance tradeoff in linear regression?
To summarize with what I think I know in a non-mathematical manner:
bias - your prediction is going to be incorrect when you use the simple model and that will happen on any dataset you use the model |
14,159 | In practice how is the random effects covariance matrix calculated in a mixed effects model? | The Goldstein .pdf @probabilityislogic linked is a great document. Here's a list of some references that discuss your particular question:
Harville, 1976: Extension of the Gauss-Markov Theorem to include the estimation of random effects.
Harville, 1977: Maximum likelihood approaches to variance component estimation an... | In practice how is the random effects covariance matrix calculated in a mixed effects model? | The Goldstein .pdf @probabilityislogic linked is a great document. Here's a list of some references that discuss your particular question:
Harville, 1976: Extension of the Gauss-Markov Theorem to inc | In practice how is the random effects covariance matrix calculated in a mixed effects model?
The Goldstein .pdf @probabilityislogic linked is a great document. Here's a list of some references that discuss your particular question:
Harville, 1976: Extension of the Gauss-Markov Theorem to include the estimation of rand... | In practice how is the random effects covariance matrix calculated in a mixed effects model?
The Goldstein .pdf @probabilityislogic linked is a great document. Here's a list of some references that discuss your particular question:
Harville, 1976: Extension of the Gauss-Markov Theorem to inc |
14,160 | In practice how is the random effects covariance matrix calculated in a mixed effects model? | Harvey Goldstein isn't a bad place to start.
As with most complex estimation methods, it varies with the software package. However, often what is done is in the following steps:
Pick an initial value for $D$ (say $D_0$) and $R$ (say $R_0$). Set $i=1$
Conditional on $D=D_{i-1}$ and $R=R_{i-1}$, estimate $\beta$ and $... | In practice how is the random effects covariance matrix calculated in a mixed effects model? | Harvey Goldstein isn't a bad place to start.
As with most complex estimation methods, it varies with the software package. However, often what is done is in the following steps:
Pick an initial valu | In practice how is the random effects covariance matrix calculated in a mixed effects model?
Harvey Goldstein isn't a bad place to start.
As with most complex estimation methods, it varies with the software package. However, often what is done is in the following steps:
Pick an initial value for $D$ (say $D_0$) and $... | In practice how is the random effects covariance matrix calculated in a mixed effects model?
Harvey Goldstein isn't a bad place to start.
As with most complex estimation methods, it varies with the software package. However, often what is done is in the following steps:
Pick an initial valu |
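The "estimate $\beta$ given the current $D$ and $R$" step in the iteration above is a generalized least squares solve. A minimal numpy illustration (the design matrix, grouping, and variance components here are all invented):

```python
import numpy as np

def gls_beta(X, Z, y, D, R):
    """GLS estimate of beta given current variance components D and R."""
    V = Z @ D @ Z.T + R                       # marginal covariance of y
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

rng = np.random.default_rng(0)
n = 12
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.kron(np.eye(3), np.ones((4, 1)))       # 3 groups of 4 obs: random intercepts
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

print(gls_beta(X, Z, y, D=np.eye(3), R=np.eye(n)))
```

When $D = 0$ and $R = I$ this reduces to ordinary least squares, which is a handy sanity check on the implementation.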
14,161 | In practice how is the random effects covariance matrix calculated in a mixed effects model? | The following article gives a closed form solution for D:
J. Shao, Mari Palta & Roger Qu, (1998), "Least squares estimation of regression parameters in mixed effects models with unmeasured covariates", Communications in Statistics - Theory and Methods, 27:6, 1487-1501, DOI:10.1080/03610929808832172 | In practice how is the random effects covariance matrix calculated in a mixed effects model? | The following article gives a closed form solution for D:
J. Shao, Mari Palta & Roger Qu, (1998), "Least squares estimation of regression parameters in mixed effects models with unmeasured covariates | In practice how is the random effects covariance matrix calculated in a mixed effects model?
The following article gives a closed form solution for D:
J. Shao, Mari Palta & Roger Qu, (1998), "Least squares estimation of regression parameters in mixed effects models with unmeasured covariates", Communications in Statis... | In practice how is the random effects covariance matrix calculated in a mixed effects model?
The following article gives a closed form solution for D:
J. Shao, Mari Palta & Roger Qu, (1998), "Least squares estimation of regression parameters in mixed effects models with unmeasured covariates |
14,162 | In practice how is the random effects covariance matrix calculated in a mixed effects model? | Two more references that could be useful: Variance Components by Searle et al., and Lynch and Walsh's Genetics and Analysis of Quantitative Traits. The Lynch and Walsh book gives a step-by-step algorithm if I recall right | In practice how is the random effects covariance matrix calculated in a mixed effects model? | Two more references that could be useful: Variance Components by Searle et al., and Lynch and Walsh's Genetics and Analysis of Quantitative Traits. The Lynch and Walsh book gives a step-by-step algorithm
Two more references that could be useful: Variance Components by Searle et al., and Lynch and Walsh's Genetics and Analysis of Quantitative Traits. The Lynch and Walsh book gives a step-by-step algorithm if I recall right | In practice how is the random effects covariance matrix calculated in a mixed effects model?
Two more references that could be useful: Variance Components by Searle et al., and Lynch and Walsh's Genetics and Analysis of Quantitative Traits. The Lynch and Walsh book gives a step-by-step algorithm
14,163 | Summary of "Large p, Small n" results | I don't know of a single paper, but I think the current book with the best survey of methods applicable to $p\gg n$ is still Friedman-Hastie-Tibshirani. It is very partial to shrinkage and lasso (I know from a common acquaintance that Vapnik was upset at the first edition of the book), but covers almost all common shri... | Summary of "Large p, Small n" results | I don't know of a single paper, but I think the current book with the best survey of methods applicable to $p\gg n$ is still Friedman-Hastie-Tibshirani. It is very partial to shrinkage and lasso (I kn | Summary of "Large p, Small n" results
I don't know of a single paper, but I think the current book with the best survey of methods applicable to $p\gg n$ is still Friedman-Hastie-Tibshirani. It is very partial to shrinkage and lasso (I know from a common acquaintance that Vapnik was upset at the first edition of the bo... | Summary of "Large p, Small n" results
I don't know of a single paper, but I think the current book with the best survey of methods applicable to $p\gg n$ is still Friedman-Hastie-Tibshirani. It is very partial to shrinkage and lasso (I kn |
14,164 | Summary of "Large p, Small n" results | What do you mean by "result": theoretical results or numerical results?
I like the reviews of Jianqing Fan; see for example this one and this one on classification (lots of self-citations).
Also, there are non-review papers whose introductions contain rich reviews; see for example this one and this one. | Summary of "Large p, Small n" results | What do you mean by "result": theoretical results or numerical results?
I like the reviews of Jianqing Fan; see for example this one and this one on classification (lots of self-citations).
Also, ther | Summary of "Large p, Small n" results
What do you mean by "result": theoretical results or numerical results?
I like the reviews of Jianqing Fan; see for example this one and this one on classification (lots of self-citations).
Also, there are non-review papers whose introductions contain rich reviews; see for example thi... | Summary of "Large p, Small n" results
What do you mean by "result": theoretical results or numerical results?
I like the reviews of Jianqing Fan; see for example this one and this one on classification (lots of self-citations).
Also, ther |
14,165 | Summary of "Large p, Small n" results | If you want a summary, maybe this is the best you can get:
http://www.springer.com/statistics/statistical+theory+and+methods/book/978-3-642-20191-2 | Summary of "Large p, Small n" results | If you want a summary, maybe this is the best you can get:
http://www.springer.com/statistics/statistical+theory+and+methods/book/978-3-642-20191-2 | Summary of "Large p, Small n" results
If you want a summary, maybe this is the best you can get:
http://www.springer.com/statistics/statistical+theory+and+methods/book/978-3-642-20191-2 | Summary of "Large p, Small n" results
If you want a summary, maybe this is the best you can get:
http://www.springer.com/statistics/statistical+theory+and+methods/book/978-3-642-20191-2
14,166 | Summary of "Large p, Small n" results | Chapter 18 of Hastie, Tibshirani, and Friedman (12th printing/second edition, this chapter wasn't in the first edition) is a nice overview with some interesting data sets. It's not quite as thorough as their treatment of older material, and a lot of the time they have to give somewhat heuristic explanations of why cert... | Summary of "Large p, Small n" results | Chapter 18 of Hastie, Tibshirani, and Friedman (12th printing/second edition, this chapter wasn't in the first edition) is a nice overview with some interesting data sets. It's not quite as thorough a | Summary of "Large p, Small n" results
Chapter 18 of Hastie, Tibshirani, and Friedman (12th printing/second edition, this chapter wasn't in the first edition) is a nice overview with some interesting data sets. It's not quite as thorough as their treatment of older material, and a lot of the time they have to give somew... | Summary of "Large p, Small n" results
Chapter 18 of Hastie, Tibshirani, and Friedman (12th printing/second edition, this chapter wasn't in the first edition) is a nice overview with some interesting data sets. It's not quite as thorough a |
14,167 | Encoding of categorical variables with high cardinality | This link provides a very good summary and should be helpful. As you allude to, label-encoding should not be used for nominal variables as it introduces an artificial ordinality. Hashing is a potential alternative that is particularly suitable for features that have high cardinality.
You can also use a distributed re... | Encoding of categorical variables with high cardinality | This link provides a very good summary and should be helpful. As you allude to, label-encoding should not be used for nominal variables as it introduces an artificial ordinality. Hashing is a potentia | Encoding of categorical variables with high cardinality
This link provides a very good summary and should be helpful. As you allude to, label-encoding should not be used for nominal variables as it introduces an artificial ordinality. Hashing is a potential alternative that is particularly suitable for features that h... | Encoding of categorical variables with high cardinality
This link provides a very good summary and should be helpful. As you allude to, label-encoding should not be used for nominal variables as it introduces an artificial ordinality. Hashing is a potentia
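The hashing alternative mentioned above can be sketched in a few lines: each category string is hashed into a fixed number of columns, so cardinality never grows the feature space. The bucket count of 32 is an arbitrary choice here, and collisions are the price paid.

```python
import hashlib

def hash_bucket(value, n_buckets=32):
    """Map a category string to one of n_buckets columns (hashing trick)."""
    digest = hashlib.md5(value.encode()).hexdigest()
    return int(digest, 16) % n_buckets

for city in ["london", "paris", "tokyo", "a-brand-new-city"]:
    print(city, hash_bucket(city))  # unseen categories hash like any other
```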
14,168 | Encoding of categorical variables with high cardinality | This might help Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems: https://link.springer.com/chapter/10.1007%2F978-3-030-85529-1_14
The most well-known encoding for categorical features with low cardinality
is One Hot Encoding [1]. This produces orthogonal and equidistant vectors ... | Encoding of categorical variables with high cardinality | This might help Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems: https://link.springer.com/chapter/10.1007%2F978-3-030-85529-1_14
The most well-known encoding | Encoding of categorical variables with high cardinality
This might help Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems: https://link.springer.com/chapter/10.1007%2F978-3-030-85529-1_14
The most well-known encoding for categorical features with low cardinality
is One Hot Encodin... | Encoding of categorical variables with high cardinality
This might help Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems: https://link.springer.com/chapter/10.1007%2F978-3-030-85529-1_14
The most well-known encoding |
14,169 | Encoding of categorical variables with high cardinality | Zhubarb had a very nice answer. I just want to provide more details on embedding and hashing and add one common approach, binning.
Starting with binning: this is a very common approach used in many fields. The key idea is that much data follows an 80-20 rule: even if a feature has many values, most of the dat... | Encoding of categorical variables with high cardinality | Zhubarb had a very nice answer. I just want to provide more details on embedding and hashing and add one common approach, binning.
Starting with binning: this is a very common approach used in many | Encoding of categorical variables with high cardinality
Zhubarb had a very nice answer. I just want to provide more details on embedding and hashing and add one common approach, binning.
Starting with binning: this is a very common approach used in many fields. The key idea is that much data follows an 80-20 rule: eve... | Encoding of categorical variables with high cardinality
Zhubarb had a very nice answer. I just want to provide more details on embedding and hashing and add one common approach, binning.
Starting with binning: this is a very common approach used in many
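The binning idea (keep the frequent categories, lump the long tail into one bucket) can be sketched as follows; the threshold `top_k=2` and the category labels are invented:

```python
from collections import Counter

def bin_rare(values, top_k=2, other="OTHER"):
    """Keep the top_k most frequent categories; collapse the rest."""
    keep = {cat for cat, _ in Counter(values).most_common(top_k)}
    return [v if v in keep else other for v in values]

raw = ["a", "a", "a", "b", "b", "c", "d"]
print(bin_rare(raw))  # ['a', 'a', 'a', 'b', 'b', 'OTHER', 'OTHER']
```

After binning, the reduced set of categories is small enough for ordinary one-hot encoding.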
14,170 | The origin of the Wilkinson-style notation such as (1|id) for random effects in mixed models formulae in R | The notation | has been around in nlme docs since version 3.1-1 and that is probably late 1999; we can easily check that on CRAN nlme code archive. nlme does use this notation, for example try library(nlme); formula(Orthodont); the | comes up - so 2000's are off. So let's dig.... "Graphical Methods for Data with Multip... | The origin of the Wilkinson-style notation such as (1|id) for random effects in mixed models formula | The notation | has been around in nlme docs since version 3.1-1 and that is probably late 1999; we can easily check that on CRAN nlme code archive. nlme does use this notation, for example try library | The origin of the Wilkinson-style notation such as (1|id) for random effects in mixed models formulae in R
The notation | has been around in nlme docs since version 3.1-1 and that is probably late 1999; we can easily check that on CRAN nlme code archive. nlme does use this notation, for example try library(nlme); formu... | The origin of the Wilkinson-style notation such as (1|id) for random effects in mixed models formula
The notation | has been around in nlme docs since version 3.1-1 and that is probably late 1999; we can easily check that on CRAN nlme code archive. nlme does use this notation, for example try library |
14,171 | Heavy-tailed errors in mixed-effects model | I took my own advice and tried this with some simulated data.
This isn't as complete an answer as it could be, but it's what's on my hard drive.
I simulated data...
...with heavy-tailed residuals
($\frac{\epsilon}{10} \sim \text{Student}(\nu=1)$).
I fit two models using brms, one with Gaussian residuals,
and one wit... | Heavy-tailed errors in mixed-effects model | I took my own advice and tried this with some simulated data.
This isn't as complete an answer as it could be, but it's what's on my hard drive.
I simulated data...
...with heavy-tailed residuals
($\ | Heavy-tailed errors in mixed-effects model
I took my own advice and tried this with some simulated data.
This isn't as complete an answer as it could be, but it's what's on my hard drive.
I simulated data...
...with heavy-tailed residuals
($\frac{\epsilon}{10} \sim \text{Student}(\nu=1)$).
I fit two models using brm... | Heavy-tailed errors in mixed-effects model
I took my own advice and tried this with some simulated data.
This isn't as complete an answer as it could be, but it's what's on my hard drive.
I simulated data...
...with heavy-tailed residuals
($\ |
14,172 | Heavy-tailed errors in mixed-effects model | Looking at models based on the t-distribution is potentially helpful as others wrote. However one reason to use the Gaussian assumption is that the Gaussian distribution minimises the Fisher-Information for given variance. This means that Gaussian parameters cannot be as precisely estimated as parameters of other distr... | Heavy-tailed errors in mixed-effects model | Looking at models based on the t-distribution is potentially helpful as others wrote. However one reason to use the Gaussian assumption is that the Gaussian distribution minimises the Fisher-Informati | Heavy-tailed errors in mixed-effects model
Looking at models based on the t-distribution is potentially helpful as others wrote. However one reason to use the Gaussian assumption is that the Gaussian distribution minimises the Fisher-Information for given variance. This means that Gaussian parameters cannot be as preci... | Heavy-tailed errors in mixed-effects model
Looking at models based on the t-distribution is potentially helpful as others wrote. However one reason to use the Gaussian assumption is that the Gaussian distribution minimises the Fisher-Informati |
14,173 | Post-hoc test for chi-square goodness-of-fit test | To my surprise a couple of searches didn't seem to turn up prior discussion of post hoc for goodness of fit; I expect there's probably one here somewhere, but since I can't locate it easily, I think it's reasonable to turn my comments into an answer, so that people can at least find this one using the same search terms... | Post-hoc test for chi-square goodness-of-fit test | To my surprise a couple of searches didn't seem to turn up prior discussion of post hoc for goodness of fit; I expect there's probably one here somewhere, but since I can't locate it easily, I think i | Post-hoc test for chi-square goodness-of-fit test
To my surprise a couple of searches didn't seem to turn up prior discussion of post hoc for goodness of fit; I expect there's probably one here somewhere, but since I can't locate it easily, I think it's reasonable to turn my comments into an answer, so that people can ... | Post-hoc test for chi-square goodness-of-fit test
To my surprise a couple of searches didn't seem to turn up prior discussion of post hoc for goodness of fit; I expect there's probably one here somewhere, but since I can't locate it easily, I think i |
14,174 | Post-hoc test for chi-square goodness-of-fit test | I've had the same issue (and was happy to find this post). I now also found a short note on the issue in Sheskin (2003: 225) that I just wanted to share:
"Another type of comparison that can be conducted is to contrast just two of the original six cells with one another. Specifically, let us assume we want to compare ... | Post-hoc test for chi-square goodness-of-fit test | I've had the same issue (and was happy to find this post). I now also found a short note on the issue in Sheskin (2003: 225) that I just wanted to share:
"Another type of comparison that can be condu | Post-hoc test for chi-square goodness-of-fit test
I've had the same issue (and was happy to find this post). I now also found a short note on the issue in Sheskin (2003: 225) that I just wanted to share:
"Another type of comparison that can be conducted is to contrast just two of the original six cells with one anothe... | Post-hoc test for chi-square goodness-of-fit test
I've had the same issue (and was happy to find this post). I now also found a short note on the issue in Sheskin (2003: 225) that I just wanted to share:
"Another type of comparison that can be condu |
14,175 | Seeking a Theoretical Understanding of Firth Logistic Regression | Firth's correction is equivalent to specifying the Jeffreys prior and seeking the mode of the posterior distribution. Roughly, it adds half of an observation to the data set assuming that the true values of the regression parameters are equal to zero.
Firth's paper is an example of higher-order asymptotics. The null ord... | Seeking a Theoretical Understanding of Firth Logistic Regression | Firth's correction is equivalent to specifying the Jeffreys prior and seeking the mode of the posterior distribution. Roughly, it adds half of an observation to the data set assuming that the true values | Seeking a Theoretical Understanding of Firth Logistic Regression
Firth's correction is equivalent to specifying the Jeffreys prior and seeking the mode of the posterior distribution. Roughly, it adds half of an observation to the data set assuming that the true values of the regression parameters are equal to zero.
Firth'... | Seeking a Theoretical Understanding of Firth Logistic Regression
Firth's correction is equivalent to specifying the Jeffreys prior and seeking the mode of the posterior distribution. Roughly, it adds half of an observation to the data set assuming that the true values
14,176 | Criteria to set STL s.window width | The question is not about whether it is monthly or weekly data, but about how quickly the seasonality evolves. If you think the seasonal pattern is constant through time, you should set this parameter to a big value, so that you use the entire data to perform your analysis.
If, on the other hand, the seasonal p... | Criteria to set STL s.window width | The question is not about whether it is monthly or weekly data, but about how quickly the seasonality evolves. If you think the seasonal pattern is constant through time, you should set this param | Criteria to set STL s.window width
The question is not about whether it is monthly or weekly data, but about how quickly the seasonality evolves. If you think the seasonal pattern is constant through time, you should set this parameter to a big value, so that you use the entire data to perform your analysis.
If on ... | Criteria to set STL s.window width
The question is not about whether it is monthly or weekly data, but about how quickly the seasonality evolves. If you think the seasonal pattern is constant through time, you should set this param
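A loose numpy analogue of what s.window controls (this is not STL itself): each seasonal sub-series (all Januaries, all Februaries, ...) is smoothed across years with a running mean of `window` years, so a small window lets the seasonal pattern evolve while a large one holds it nearly constant. All data here is simulated.

```python
import numpy as np

def seasonal_subseries_mean(x, period=12, window=3):
    """Smooth each seasonal sub-series (all Januaries, all Februaries, ...)
    with a running mean over `window` years; assumes >= `window` years of data."""
    x = np.asarray(x, dtype=float)
    seasonal = np.empty_like(x)
    kernel = np.ones(window) / window
    for m in range(period):
        sub = x[m::period]                       # one value per year
        seasonal[m::period] = np.convolve(sub, kernel, mode="same")
    return seasonal

rng = np.random.default_rng(0)
t = np.arange(6 * 12)                            # six simulated years
x = np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=t.size)
print(seasonal_subseries_mean(x, window=5)[:3])  # smoothed seasonal estimate
```

STL applies a loess smoother rather than a flat running mean, but the role of the window width is the same.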
14,177 | High variance of leave-one-out cross-validation | This question is probably going to end up being closed as a duplicate of Variance and bias in cross-validation: why does leave-one-out CV have higher variance?, but before it happens I think I will turn my comments into an answer.
I also do not fully understand how LOO can be unbiased, but have a high variance?
Consi... | High variance of leave-one-out cross-validation | This question is probably going to end up being closed as a duplicate of Variance and bias in cross-validation: why does leave-one-out CV have higher variance?, but before it happens I think I will tu | High variance of leave-one-out cross-validation
This question is probably going to end up being closed as a duplicate of Variance and bias in cross-validation: why does leave-one-out CV have higher variance?, but before it happens I think I will turn my comments into an answer.
I also do not fully understand how LOO c... | High variance of leave-one-out cross-validation
This question is probably going to end up being closed as a duplicate of Variance and bias in cross-validation: why does leave-one-out CV have higher variance?, but before it happens I think I will tu |
14,178 | High variance of leave-one-out cross-validation | This high variance is with respect to the space of training sets. Here is why the LOOCV has high variance:
in LOOCV, we get prediction error for each observation, say observation i, using the whole observed dataset at hand except this observation. So, the predicted value for i is very dependent on the current dataset. ...
14,179 | High variance of leave-one-out cross-validation | There are two "kinds" of variance in LOOCV. One is the variance in the result, and another is the variance in the model. Because there is not much randomness in training/validation splits, the result's variance is lower than with the validation set approach. However, we use almost the same models (we have nearly the same dat...
14,180 | Machine learning classifiers big-O or complexity | Let $N$ = number of training examples, $d$ = dimensionality of the features and $c$ = number of classes.
Then training has complexities:
Naive Bayes is $O(Nd)$; all it needs to do is compute the frequency of every feature value $d_i$ for each class.
$k$-NN is in $\mathcal{O}(1)$ (some people even say it is non-exist...
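The $O(Nd)$ claim for Naive Bayes can be made concrete with a small sketch (toy code and made-up data, not from the original answer): training is a single pass over the $N$ examples and their $d$ features, accumulating per-class frequency counts.

```python
from collections import defaultdict

def nb_train(X, y):
    """Toy Naive Bayes training: one pass over N examples and d features,
    counting how often each (feature index, value) pair occurs per class,
    hence O(Nd) time."""
    counts = defaultdict(int)        # (class, feature index, value) -> frequency
    class_counts = defaultdict(int)  # class -> number of examples
    for xi, yi in zip(X, y):
        class_counts[yi] += 1
        for j, v in enumerate(xi):
            counts[(yi, j, v)] += 1
    return counts, class_counts

X = [[1, 0], [1, 1], [0, 1]]
y = ["a", "a", "b"]
counts, class_counts = nb_train(X, y)
print(counts[("a", 0, 1)])  # feature 0 takes value 1 in both class-"a" examples -> 2
```

Prediction with the resulting counts is likewise cheap, which is why Naive Bayes is a common baseline for the complexities listed above.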
14,181 | What is the importance of the function $e^{-x^2}$ in statistics? | The reason that this function is important is indeed the normal distribution and its closely linked companion, the central limit theorem (we have some good explanations of the CLT in other questions here).
In statistics, the CLT can typically be used to compute probabilities approximately, making statements like "we are...
14,182 | What is the importance of the function $e^{-x^2}$ in statistics? | You are right: the normal distribution, or Gaussian, is a scaled and shifted $\exp(-x^2)$, so the importance of $\exp(-x^2)$ comes mostly from the fact that it is essentially the normal distribution.
And the normal distribution is important mainly because ("under mild regularity conditions") the sum of many independent and id...
14,183 | What is the importance of the function $e^{-x^2}$ in statistics? | A unique feature of this function is that its spectral density (Fourier transform) has the same form as the function itself. This means that when it is properly scaled to be a probability density function (PDF), its moment generating function (MGF) and characteristic function (CF) have the same form and scaling:
PDF: $\frac ...
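That self-similarity under the Fourier transform can be checked numerically (a quick sketch; the grid parameters are arbitrary): $\int e^{-x^2}\cos(\omega x)\,dx = \sqrt{\pi}\,e^{-\omega^2/4}$, again a Gaussian in $\omega$.

```python
import math

def gaussian_ft(w, h=0.001, L=10.0):
    """Riemann-sum approximation of the Fourier cosine transform of exp(-x^2),
    integrating over [-L, L] with step h."""
    n = int(2 * L / h)
    return h * sum(math.exp(-(-L + k * h) ** 2) * math.cos(w * (-L + k * h))
                   for k in range(n))

w = 2.0
closed_form = math.sqrt(math.pi) * math.exp(-w * w / 4)  # a Gaussian in w
print(abs(gaussian_ft(w) - closed_form) < 1e-6)  # the two agree numerically
```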
14,184 | What is the importance of the function $e^{-x^2}$ in statistics? | One version of CLT tells us that the distribution of averages of independent identically distributed random variables will start to look lik...
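That CLT statement is easy to see in simulation (a sketch with arbitrary sample sizes, not from the original answer): averages of $n$ iid Uniform(0, 1) draws have mean $1/2$ and standard deviation $\sqrt{1/(12n)}$, and their distribution looks increasingly Gaussian as $n$ grows.

```python
import random
import statistics

random.seed(0)
n, reps = 30, 20000  # n draws per average, reps simulated averages
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(reps)]

# CLT: the averages are approximately N(1/2, 1/(12 n)).
print(abs(statistics.fmean(means) - 0.5) < 0.01)
print(abs(statistics.pstdev(means) - (1 / (12 * n)) ** 0.5) < 0.005)
```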
14,185 | Goodness of fit for 2D histograms | OK, I've extensively revised this answer. I think rather than binning your data and comparing counts in each bin, the suggestion I'd buried in my original answer of fitting a 2d kernel density estimate and comparing them is a much better idea. Even better, there is a function kde.test() in Tarn Duong's ks package for...
14,186 | Predictive Modeling - Should we care about mixed modeling? | I have been wondering this myself, and here are my tentative conclusions. I would be happy if anyone could supplement/correct this with their knowledge and any references on this topic.
If you want to test hypotheses about logistic regression coefficients by checking statistical significance, you need to model the corr...
14,187 | Notation conventions for random variables and their distributions | I like to say: a random variable assigns a number to each possible outcome of a random "experiment", where a random experiment is some well-defined process with an uncertain outcome.
$X^2$ is another random variable; whenever $X = x$, $X^2 = x^2$.
I would generally use lowercase letters for realizations of random vari...
14,188 | How can machine learning models (GBM, NN etc.) be used for survival analysis? | For the case of neural networks, this is a promising approach: WTTE-RNN - Less hacky churn prediction.
The essence of this method is to use a Recurrent Neural Network to predict parameters of a Weibull distribution at each time-step and optimize the network using a loss function that takes censoring into account.
The a...
14,189 | How can machine learning models (GBM, NN etc.) be used for survival analysis? | Have a look at these references:
https://www.stats.ox.ac.uk/pub/bdr/NNSM.pdf
http://pcwww.liv.ac.uk/~afgt/eleuteri_lyon07.pdf
Also note that traditional hazards-based models like Cox Proportional Hazards (CPH) are not designed to predict time-to-event, but rather to infer variables' impact (correlation) against i) obse...
14,190 | How can machine learning models (GBM, NN etc.) be used for survival analysis? | As @dsaxton said, you can build a discrete-time model. You set it up to predict p(fail on this day given survival up to the previous day). Your inputs are the current day (in whatever representation you want, e.g. one-hot encoding, integer, spline, ...) as well as any other independent variables you might want.
So you create rows...
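The "create rows" step of such a discrete-time setup is usually an expansion of each subject into one row per period at risk. A minimal sketch (the helper and column names are mine, not from the original answer):

```python
def person_period_rows(duration, event, covariates):
    """Expand one subject into person-period rows for a discrete-time model:
    one row per day at risk, with fail=1 only on the final day if the event
    was observed (event=1); censored subjects (event=0) never get fail=1."""
    rows = []
    for day in range(1, duration + 1):
        fail = 1 if (event == 1 and day == duration) else 0
        rows.append({"day": day, "fail": fail, **covariates})
    return rows

rows = person_period_rows(duration=3, event=1, covariates={"age": 50})
print(len(rows), rows[-1]["fail"])  # 3 rows at risk; failure flagged on day 3
```

Any binary classifier (GBM, NN, logistic regression) can then be trained on these rows to predict the discrete hazard.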
14,191 | How to test whether a distribution follows a power law? | According to Clauset et al., this is how you test the power law tail with poweRlaw package:
Construct the power law distribution object. In this case, your data is discrete, so use the discrete version of the class
data <- c(100, 100, 10, 10, 10 ...)
data_pl <- displ$new(data)
Estimate the $x_{min}$ and the expon...
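As a rough cross-check of the exponent that the poweRlaw workflow estimates, the continuous MLE from Clauset et al. has the closed form $\hat\alpha = 1 + n\left[\sum_i \ln(x_i/x_{\min})\right]^{-1}$. A sketch on synthetic data (not a substitute for the full estimation and goodness-of-fit procedure):

```python
import math
import random

def alpha_mle(data, xmin):
    """Continuous power-law MLE: alpha = 1 + n / sum(log(x / xmin)),
    computed over the tail x >= xmin (Clauset et al.)."""
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic power-law sample with alpha = 2.5 via inverse-CDF sampling.
random.seed(1)
alpha, xmin = 2.5, 1.0
sample = [xmin * (1 - random.random()) ** (-1 / (alpha - 1)) for _ in range(50000)]
print(abs(alpha_mle(sample, xmin) - alpha) < 0.05)  # recovers the exponent
```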
14,192 | Understanding the variance of random effects in lmer() models | This is a classic one-way ANOVA. A very short answer to your question is that the variance component is made up of two terms.
$$\hat{\sigma}^2_{\alpha}=E\left[\frac{1}{48}\sum_{s=1}^{48} \alpha_s^2\right]= \frac{1}{48}\sum_{s=1}^{48}\hat{\alpha}_s^2 +\frac{1}{48}\sum_{s=1}^{48}\operatorname{var}(\hat{\alpha}_s)$$
So the term you...
14,193 | How to interpret a QQ-plot of p-values | This is an older question, but I found it helpful when trying to interpret QQ-plots for the first time. I thought I'd add to these answers in case more people stumble across this in the future.
The thing I found a little tricky to understand is: what are those points exactly? I found going to the code made it easy to fig...
14,194 | How to interpret a QQ-plot of p-values | A good reference on the analysis of p-value plots is [1].
The result you are seeing may be driven by the fact that the signal/effects exist only at some subset of tests. These are driven above the acceptance bands. Rejecting only the p-values outside the bands can indeed be justified, but perhaps more importantly, you should...
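The baseline behind such plots: under the global null, p-values are Uniform(0, 1), so the sorted p-values should track the uniform quantiles and the QQ-plot hugs the diagonal. A small sketch with simulated null p-values (toy numbers, not from the original answer):

```python
import random

random.seed(2)
n = 1000
pvals = sorted(random.random() for _ in range(n))  # p-values under the null
expected = [(i + 0.5) / n for i in range(n)]        # uniform quantiles

# Deviations of order 1/sqrt(n) are expected under the null; large excursions
# above the diagonal at the small-p end are what signal real effects.
max_dev = max(abs(p - e) for p, e in zip(pvals, expected))
print(max_dev < 0.1)  # consistent with uniformity
```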
14,195 | Combining two confidence intervals/point estimates | You could do a pooled estimate as follows. You can then use the pooled estimates to generate a combined confidence interval. Specifically, let:
$\bar{x_1} \sim N(\mu,\frac{\sigma^2}{n_1})$
$\bar{x_2} \sim N(\mu,\frac{\sigma^2}{n_2})$
Using the confidence intervals for the two cases, you can re-construct the standard er...
14,196 | Combining two confidence intervals/point estimates | Sounds a lot like meta-analysis to me. Your assumption that the samples are from the same population means you can use fixed-effect meta-analysis (rather than random-effects meta-analysis). The generic inverse-variance method takes a set of independent estimates and their variances as input, so doesn't require the full...
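The generic inverse-variance method mentioned above reduces to a few lines (a sketch; the function name is mine): each estimate is weighted by $1/\mathrm{se}^2$, and the pooled variance is the reciprocal of the total weight.

```python
import math

def fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooling: weights w_i = 1/se_i^2,
    pooled estimate = sum(w_i x_i)/sum(w_i), pooled se = sqrt(1/sum(w_i))."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    return pooled, math.sqrt(1.0 / sum(weights))

est, se = fixed_effect([10.0, 12.0], [1.0, 1.0])
print(est)  # equal precision reduces to the simple average: 11.0
```

With unequal standard errors, the more precise estimate dominates the pooled value, which is exactly the behaviour the fixed-effect model is after.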
14,197 | Combining two confidence intervals/point estimates | This is not unlike a stratified sample. So, pooling the samples for a point estimate and standard error seems like a reasonable approach. The two samples would be weighted by sample proportion.
14,198 | Combining two confidence intervals/point estimates | See paper:
K.M. Scott, X. Lu, C.M. Cavanaugh, J.S. Liu...
14,199 | When an analytical Jacobian is available, is it better to approximate the Hessian by $J^TJ$, or by finite differences of the Jacobian? | GOOD question. First, recall where this approximation $H \approx J^T J$ comes from. Let $(x_i, y_i)$ be your data points, $f(\cdot)$ be your model and $\beta$ be the parameters of your model. Then the objective function of the non-linear least squares problem is $\frac{1}{2} r^T r$ where $r$ is the vector of the res...
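Where the approximation stands numerically can be seen on a toy one-parameter model (made-up data, not from the original answer): the true Hessian of $\frac12 r^T r$ is $J^T J + \sum_i r_i \nabla^2 r_i$, so when residuals are small, $J^T J$ tracks a finite-difference Hessian closely.

```python
import math

xs, ys = [0.5, 1.0, 1.5], [1.6, 2.8, 4.5]  # toy data, roughly y = exp(x)

def objective(b):  # 0.5 * r^T r with residuals r_i = y_i - exp(b x_i)
    return 0.5 * sum((y - math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

def gauss_newton_h(b):  # J^T J with J_i = dr_i/db = -x_i exp(b x_i)
    return sum((x * math.exp(b * x)) ** 2 for x in xs)

def finite_diff_h(b, eps=1e-5):  # central second difference of the objective
    return (objective(b + eps) - 2 * objective(b) + objective(b - eps)) / eps ** 2

b = 1.0  # near the least-squares fit, so residuals are small
# The gap between the two is the neglected term sum_i r_i * d^2 r_i / db^2.
print(abs(finite_diff_h(b) - gauss_newton_h(b)) < 1.0)
```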
14,200 | Observed information matrix is a consistent estimator of the expected information matrix? | $\newcommand{\convp}{\stackrel{P}{\longrightarrow}}$
I guess directly establishing some sort of uniform law of large numbers
is one possible approach.
Here is another.
We want to show that $\frac{J^N(\theta_{MLE})}{N} \convp I(\theta^*)$.
(As you said, we have by the WLLN that $\frac{J^N(\theta)}{N} \convp I(\theta)$. ...
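The claim can be sanity-checked in a Bernoulli model (a simulation sketch with arbitrary parameters): there the observed information per observation at the MLE works out to $1/(\hat p(1-\hat p))$, which converges to the expected information $I(p) = 1/(p(1-p))$.

```python
import random

random.seed(3)
p, N = 0.3, 200_000
xs = [1 if random.random() < p else 0 for _ in range(N)]
p_hat = sum(xs) / N  # MLE for a Bernoulli sample

# Observed information at the MLE, per observation: J(p_hat)/N = 1/(p_hat(1-p_hat)).
obs_info_per_n = 1.0 / (p_hat * (1.0 - p_hat))
expected_info = 1.0 / (p * (1.0 - p))  # Fisher information I(p) for Bernoulli
print(abs(obs_info_per_n - expected_info) < 0.1)
```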