Should we normalize before using VarianceThreshold in sklearn?
I created an issue to request that the documentation be clarified. This is the response: "Normalized and unnormalized data both have valid use cases with VarianceThreshold: (1) If you only want to use VarianceThreshold to remove constant features, then threshold=0 works regardless of whether the data was normalized. (2) If you want the threshold to have the same meaning across features, then normalizing makes sense. The example in the docstring is showcasing use case 1." Source: https://github.com/scikit-learn/scikit-learn/issues/23394. I believe the second point is that a non-zero threshold only really makes sense if the features have been normalised (so they are on the same scale), as stated by prashanth.
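Both use cases can be seen on a toy matrix (a minimal sketch assuming scikit-learn is installed; the data is made up):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Column 0 is constant; columns 1 and 2 carry the same signal on different scales.
X = np.array([[0.0, 1.0, 100.0],
              [0.0, 2.0, 200.0],
              [0.0, 3.0, 300.0]])

# Use case 1: threshold=0 drops only the constant column, whatever the scales.
X_const_removed = VarianceThreshold(threshold=0.0).fit_transform(X)
print(X_const_removed.shape)  # (3, 2)

# Use case 2: a non-zero threshold compares raw variances, so without prior
# scaling the large-scale column 2 is the only survivor here.
X_thresholded = VarianceThreshold(threshold=1.0).fit_transform(X)
print(X_thresholded.shape)    # (3, 1)
```

Normalising columns 1 and 2 first would make them pass or fail the threshold together, which is the point of use case 2.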
regression - What does the median absolute error metric say about the models?
I have to disagree with the first posted answer. As stated in the documentation, the median absolute error is useful because it is essentially insensitive to outliers (as long as there aren't too many of them). This is because it is the median of all of the absolute values of the residuals, and the median is unaffected by values in the tails. So this loss function can be used to perform robust regression. In contrast, the mean squared error can be highly sensitive to outliers, and the mean absolute error can be somewhat sensitive to outliers (although less so than the mean squared error). Note that using the median absolute error only protects against outliers in the response/target variable, not outliers in the predictors/feature variables. One possible source of confusion is mean/median error vs. mean/median absolute error. The former cannot be used as a cost function for regression, since the cost must always be positive (among other things).
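The outlier-sensitivity ranking is easy to demonstrate numerically (a toy sketch; the values are made up):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([1.1, 2.1, 2.9, 4.1, 50.0])   # last prediction is a wild outlier

res = np.abs(y_true - y_pred)
med_ae = np.median(res)       # ~0.1  -> barely notices the outlier
mean_ae = np.mean(res)        # ~9.1  -> pulled up by the outlier
mse = np.mean(res ** 2)       # ~405  -> dominated by the outlier
print(med_ae, mean_ae, mse)
```

One bad prediction leaves the median absolute error near 0.1 while inflating the mean-based metrics by orders of magnitude.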
regression - What does the median absolute error metric say about the models?
With an OLS regression the mean error will be zero by construction. I guess you could use the median error to gauge the distribution around the mean - i.e. if you know the mean error is zero but the median error is -10, then you know there must be a few very large errors that skew the mean back up to zero. It's not very informative, really, I don't think. There are many better error-based metrics, like mean absolute percentage error, mean squared error, mean absolute error, etc.
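The "zero by construction" claim is easy to check numerically (a sketch with simulated data; the coefficients are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + 1.0 + rng.normal(size=100)

X = np.column_stack([np.ones_like(x), x])      # design matrix with an intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

mean_error = residuals.mean()                  # zero up to floating point
median_error = np.median(residuals)            # need not be zero
print(abs(mean_error) < 1e-10, median_error)
```

The mean residual vanishes whenever an intercept is included, while the median residual is free to reflect skewness in the errors.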
Sample random variables conditional on their sum
If you seek the conditional density of $(X_1,...,X_{n-1})$ given $$S=\sum_{k=1}^n X_k$$ a change of variable from $$(X_1,...,X_{n})\sim\prod_{i=1}^n f(x_i)$$ to $$\left(X_1,...,X_{n-1},S\right)\sim\prod_{i=1}^{n-1}f(x_i)\times f(s-x_1-\cdots-x_{n-1})$$ [with Jacobian equal to 1] shows that this conditional density is proportional to $$f(x_1)\cdots f(x_{n-1})\,f(s-x_1-\cdots-x_{n-1})$$ Therefore there exists a closed-form expression for the conditional density, and one can thus call a generic simulation method to simulate from it, like accept-reject, Gibbs sampling, or a Metropolis-Hastings algorithm. The resolution even extends to independent variables that are not identically distributed. Note: A similar question was asked a while ago, but none of the answers mentions this generic solution.

For instance, if $f$ is the N$(0,1)$ density and $n=4$, a Metropolis-within-Gibbs sampler for this problem would be of the form

    T=1e3      # Gibbs steps
    n=3        # n-1
    s=3.1415   # imposed sum
    x=matrix(rnorm(n),T,n)
    for (t in 2:T){
      x[t,]=x[t-1,]
      for (i in 1:n){
        prop=rnorm(1,x[t-1,i],3)
        if (runif(1)<dnorm(prop)*
            dnorm(s-sum(x[t,-i])-prop)/
            dnorm(x[t-1,i])/dnorm(s-sum(x[t,])))
          x[t,i]=prop}}

Here is the outcome of the simulation of the three (first) components $x_1$ (brown), $x_2$ (red), and $x_3$ (yellow): [figure reproduced from my blog]

I recently came upon an unexpected property shown by Lindqvist and Taraldsen (Biometrika, 2005): to simulate a sample ${\bf y}$ conditional on the realisation of a sufficient statistic, $T({\bf y})=t^0$, it is sufficient (!!!) to simulate the components of ${\bf y}$ as ${\bf y}=G({\bf u},\theta)$, with ${\bf u}$ a random variable with fixed distribution, e.g., a $U(0,1)$, and to solve in $\theta$ the fixed point equation $$T({\bf y})=T\circ G({\bf u},\theta)=t^0$$ assuming there exists a single solution to this equation. To borrow a simple example from the authors, take an exponential sample ${\bf y}$ to be simulated given a fixed value of the sum statistic. As is well known, the conditional distribution of ${\bf y}$ is then a (rescaled) Beta, and the proposed algorithm ends up being a standard Beta generator. For the method to work in general, $T({\bf y})$ must factorise through a function of the ${\bf u}$'s, a so-called pivotal condition. If this condition does not hold, it gets more complicated: the authors introduce a pseudo-prior distribution on the parameter $\theta$ to make it independent from the ${\bf u}$'s conditional on $T({\bf y})=t^0$. While the setting is necessarily one of exponential families and of sufficient conditioning statistics, I find it amazing that this property is not better known.
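The exponential example admits a direct simulation, since i.i.d. exponentials conditioned on their sum are uniform on the rescaled simplex, i.e. a rescaled Dirichlet$(1,\ldots,1)$, which normalised exponentials produce (a numpy sketch of that standard representation, not of the authors' fixed-point algorithm):

```python
import numpy as np

rng = np.random.default_rng(42)
n, s = 4, 10.0

# Normalised i.i.d. exponentials are Dirichlet(1,...,1); rescaling by s gives
# one draw of (X_1,...,X_n) conditional on the sum being exactly s.
e = rng.exponential(size=n)
x = s * e / e.sum()

print(x.sum(), (x > 0).all())   # the constraint holds up to rounding
```

Each accepted draw satisfies the sum constraint exactly, with no MCMC burn-in needed in this special case.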
Sample random variables conditional on their sum
I think it's worth noting the normal case even though the questioner seeks a more general answer. Following the logic shown in "How to generate two groups of $n$ random numbers in $U(0,1)$ such that the sums of the two groups are equal?" leads to the result that for the normal case the conditional distributions are all normal. So, for example, if we want to generate realizations of $X_1, X_2, \ldots, X_n$ that sum to $z$, where the $X_i$ are all identically normal with variance $\sigma^2$, here is the resulting approach: (1) Generate $X_1 \sim N \left({z \over n},{\frac{n-1}{n}} \sigma^2 \right)$ (2) Generate $X_2 \sim N \left({z-x_1 \over n-1},{\frac{n-2}{n-1}} \sigma^2 \right)$ (3) For general $i$, generate $X_i \sim N \left({z-\sum_{j=1}^{i-1}x_j \over {n-i+1}},{\frac{n-i}{n-i+1}} \sigma^2 \right)$ (4) Generate $X_{n-1} \sim N \left({z-\sum_{j=1}^{n-2}x_j \over {2}},{\frac{\sigma^2}{2}} \right)$ (5) Let $X_n = z - \sum_{j=1}^{n-1} x_j$
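The five steps above can be sketched directly in numpy (the function name is mine; it assumes i.i.d. normals with common variance $\sigma^2$):

```python
import numpy as np

def normal_conditional_on_sum(n, z, sigma, rng):
    """Draw (X_1,...,X_n) i.i.d. normal conditioned on summing to z."""
    x = []
    remaining = z
    for i in range(1, n):                 # steps (1)-(4)
        m = n - i + 1                     # number of variables not yet fixed
        xi = rng.normal(remaining / m, sigma * np.sqrt((m - 1) / m))
        x.append(xi)
        remaining -= xi
    x.append(remaining)                   # step (5): X_n = z - sum of the rest
    return np.array(x)

rng = np.random.default_rng(1)
sample = normal_conditional_on_sum(5, z=3.0, sigma=1.0, rng=rng)
print(sample.sum())                       # 3.0 up to floating point
```

Each draw uses the conditional mean and shrunken variance from step (3), and the final component absorbs whatever is left so the constraint holds exactly.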
Why do residual networks work?
In short, it works because the gradient reaches every layer with only a small number of layers in between to differentiate through. If you pick a layer from the bottom of your stack of layers, it has a connection with the output layer that passes through only a couple of other layers, so the gradient arrives less attenuated. It is a way to solve the vanishing gradient problem, and it is why models could be built even deeper.
Why do residual networks work?
Why does this allow the training of deep networks, escaping network saturation at deep levels? We can treat a layer as a function, and adding a layer (with more parameters) yields a new function with a larger hypothesis space. There are two ways of adding a layer. With the generic method, we just stack a layer on top, and the larger hypothesis space does not guarantee getting closer to the truth (the local or global optimum) than before adding it. However, if we add residual connections, the parametrization behaves like a 'Taylor expansion': each new function class nests the previous one, so the more layers you add, the closer your parameters can possibly get to the truth. Thus, only if larger function classes contain the smaller ones are we guaranteed that enlarging them strictly increases the expressive power of the network. For deep neural networks, if we can train the newly added layer to be the identity function f(x) = x, the new model will be at least as effective as the original model; and since the new model may find a better fit to the training dataset, the added layer can make it easier to reduce training error. At the heart of the proposed residual network (ResNet) is the idea that every additional layer should easily contain the identity function as one of its elements. References: Deep Learning - ResNet; Dive into Deep Learning.
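The identity-function point can be sketched in a few lines of numpy (a toy block, not the actual ResNet architecture):

```python
import numpy as np

def residual_block(x, W):
    # A plain block would return np.tanh(x @ W) alone; the skip adds x back.
    return x + np.tanh(x @ W)

x = np.array([[1.0, -2.0, 0.5]])
W_zero = np.zeros((3, 3))

out = residual_block(x, W_zero)
is_identity = np.allclose(out, x)
print(is_identity)   # True: with zeroed weights the block is exactly f(x) = x
```

Because tanh(0) = 0, driving the weights to zero recovers the identity, so the residual-augmented class always contains the smaller class.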
Why do residual networks work?
There is a cleaner answer to this question (found on a discussion forum): the point of shortcuts is to prevent vanishing gradients (and, more rarely, exploding ones). Imagine that during training the predicted output is not accurate, so there is some error. For example, there is a Siberian cat in the picture, but the network predicts it as a European Shorthair cat. Not a big difference; the fur is shorter for the latter. Now, this difference must be backpropagated through the whole network as a gradient. You can imagine that this gradient gets smaller and smaller as we go back layer by layer towards the image itself, when the weights are overall smaller than one. This is what we call a "vanishing gradient" (just to mention, with weights greater than one we would get exploding gradients, which is just as bad). Gradients that are too small become inaccurate and can eventually reach zero, so they would not influence or train the earlier layers at all. These vanishing gradients can be avoided by shortcuts. If you make shortcuts even just over one layer, the gradients can take a shorter path back, roughly half the original length. This greatly helps avoid vanishing (or exploding) gradients.
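The shrinking of the gradient along a deep chain can be caricatured with scalars (a toy sketch; the weight value 0.9 and depth 50 are made up):

```python
import numpy as np

w = np.full(50, 0.9)              # 50 layers, each scaling its input by 0.9

plain_grad = np.prod(w)           # d(out)/d(in) for out = w50 * ... * w1 * x
skip_grad = np.prod(1.0 + w)      # same chain, but each layer computes x + w*x

print(plain_grad)                 # ~0.005 -> vanishing with depth
print(skip_grad > 1.0)            # True: the identity path keeps it non-vanishing
```

Without skips, the gradient is a product of sub-unit factors and decays geometrically; with a skip, each factor is (1 + w), so the direct path never shrinks the signal.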
MCMC in a frequentist setting
As indicated in the many comments, Markov chain Monte Carlo is a special case of the Monte Carlo method, which is designed to approximate quantities related to a distribution via pseudo-random number simulation. As such, it has no connection with any particular statistical paradigm, and the earliest instances of the method, as in Metropolis et al. (1953), were unrelated to statistics, Bayesian or frequentist. If anything, these methods are naturally "frequentist" (an ill-defined category anyway) in that they rely on the stabilisation of frequencies or averages towards the expectation as the number of simulations increases, aka the Law of Large Numbers. It is therefore possible within non-Bayesian complex problems to use MCMC methods to replace intractable integrals. Consider for instance:

- the optimisation of likelihoods with no closed-form expression, as in latent variable and random effect models. The EM algorithm may fail because of an intractable "E" step, in which case the expectation must be replaced by a Monte Carlo or Markov chain Monte Carlo approximation, with a possible evaluation of the error. Or it may fail because of an intractable "M" step, in which case the maximisation can sometimes be replaced by a Markovian maximisation procedure such as simulated annealing, or by Gibbs steps;
- simulated inference methods in econometrics, such as the simulated method of moments, indirect inference, and empirical likelihood;
- approximations of likelihoods with intractable normalising constants, such as Ising, Potts, and other Markov random field models, using for instance exchange algorithms;
- frequentist goodness-of-fit tests, which may require computations of coverage probabilities, $p$-values, and powers, for sufficient or insufficient statistics with no closed-form density, or conditional on ancillary statistics. Take the example of testing for independence in (large) contingency tables (or deriving the maximum likelihood estimator);
- again in econometrics, Laplace-type estimators, "which include means and quantiles of quasi-posterior distributions defined as transformations of general nonlikelihood-based statistical criterion functions, such as those in GMM, nonlinear IV, empirical likelihood, and minimum distance methods" (Chernozhukov and Hong, 2003), which rely on MCMC algorithms.
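As a concrete sketch of the goodness-of-fit point, here is a hypothetical Monte Carlo $p$-value for independence in a 2x2 contingency table, simulating tables under the null by permutation instead of relying on chi-square asymptotics (the table and helper function are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2_stat(table):
    # Pearson chi-square statistic against the independence fit.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

observed = np.array([[20, 10], [10, 20]])
stat = chi2_stat(observed)

# Reconstruct individual-level labels, then permute one margin to simulate
# from the null of independence (both margins are preserved).
rows = np.repeat([0, 1], observed.sum(axis=1))
cols = np.concatenate([np.repeat([0, 1], observed[0]),
                       np.repeat([0, 1], observed[1])])
sims = []
for _ in range(2000):
    table = np.zeros((2, 2))
    np.add.at(table, (rows, rng.permutation(cols)), 1)
    sims.append(chi2_stat(table))

p_value = np.mean(np.array(sims) >= stat)
print(0.0 <= p_value <= 1.0)   # a valid Monte Carlo p-value, no asymptotics used
```

The frequency of simulated statistics exceeding the observed one estimates the exact permutation $p$-value, a purely frequentist use of simulation.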
Neural Networks: Is an epoch in SGD the same as an epoch in mini-batch?
In neural network terminology:

- one epoch = one forward pass and one backward pass over all of the training examples;
- batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need;
- number of iterations = number of passes, each pass using [batch size] examples. To be clear, one pass = one forward pass + one backward pass (we do not count the forward pass and backward pass as two different passes).

Example: if you have 1000 training examples and your batch size is 500, then it will take 2 iterations to complete 1 epoch.
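The arithmetic in the example reduces to one line (a trivial sketch; rounding up covers a final partial batch):

```python
import math

n_examples = 1000
batch_size = 500

# One epoch visits every example once, so iterations = ceil(N / batch size).
iterations_per_epoch = math.ceil(n_examples / batch_size)
print(iterations_per_epoch)   # 2, matching the example in the text
```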
Neural Networks: Is an epoch in SGD the same as an epoch in mini-batch?
Franck's answer is not correct. It takes some guts to say this because he has far more reputation than me and many people have already voted for his answer. An epoch means a single pass through the training set, not a single pass per training example. So, yes: if we do mini-batch GD instead of batch GD, say in batches of 20, one epoch now consists of N/20 weight updates, where N is the total number of samples. To be verbose: in batch gradient descent, a single pass through the training set allows you to take only one gradient descent step. With mini-batch gradient descent, if the training set splits into, say, 5,000 mini-batches, then a single pass through the training set, that is one epoch, allows you to take 5,000 gradient descent steps.
fit GLM for weibull family [closed]
Sorry, I'm quite late with this, but it might help someone, I believe: the gamlss package is what you should be looking for. It supports almost all distributions (not just exponential-family ones) and gives amazing flexibility over almost all the parameters of a distribution.
26,313
fit GLM for weibull family [closed]
The glm() function does not support the Weibull distribution in R unfortunately. You can try ?family to see which distributions are available. I would try using survreg() from the survival package instead.
26,314
fit GLM for weibull family [closed]
I have used the brms package, which is Bayesian. It supports the Weibull, exponential, lognormal, Frechet, and other families and (left/right/interval) censoring, so it implements AFT models. It also includes random effects, which are known in survival models as "frailty", and a host of other regression options like gam-style smoothers. Since Bayesian approaches use MCMC sampling, it's slower than glm, gamlss, or survreg, but it's also a comprehensive regression solution, and being Bayesian has other advantages. (I love its stanplot, which provides a host of illuminating diagnostic plots.)
26,315
Relationship between AUC and U Mann-Whitney statistic
OK, I found the answer and, as I expected, it is trivial. The $U$ test statistic value depends on the group it is calculated for (this does not affect the test result in any way). In the code I wrote, the test statistic was computed as a measure of support for the hypothesis that the group with the smaller mean dominates the group with the higher mean, which is of course not true, so that's why $U$ was small. After switching the direction of the comparison, so that the hypothesis tested by the Wilcoxon-Mann-Whitney test is that the group with the higher mean dominates the one with the lower mean, which is true, I got the correct relationship between $U$ and $AUC$ (that is, $AUC = \frac{U}{n_1n_2}$). So everything is correct.
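The relationship $AUC = \frac{U}{n_1 n_2}$ is easy to verify by brute-force pair counting (the scores below are made-up numbers):

```python
import itertools

# Made-up classifier scores for the two groups.
pos = [0.9, 0.8, 0.7, 0.75]   # the group that should dominate
neg = [0.2, 0.4, 0.85]

# Mann-Whitney U computed for the dominating group: count pairs where
# a "positive" score beats a "negative" one; ties count one half.
U = sum(1.0 if p > q else 0.5 if p == q else 0.0
        for p, q in itertools.product(pos, neg))

# AUC is the probability that a random positive outranks a random
# negative, which is exactly U / (n1 * n2).
auc = U / (len(pos) * len(neg))
print(U, auc)
```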
26,316
Q-learning with Neural Network as function approximation
Your target should be just $r_{t+1}+\gamma \max_a Q(s_{t+1},a)$. Note that your error term (which is correct) could then be rewritten as $r_{t+1}+\gamma \max_a Q(s_{t+1},a) - Q_t$ which is the term inside brackets in the update formula. This will get multiplied by your NN learning rate and other backpropagation terms during learning, and then added to the previous weights, just like the $Q$ update formula.
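A one-step numeric sketch (all values invented) showing that with this target the network's error is exactly the bracketed term of the tabular update:

```python
gamma = 0.9   # discount factor
alpha = 0.1   # learning rate (plays the role of the NN learning rate)

q_current = 2.0               # Q(s_t, a_t)
reward = 1.0                  # r_{t+1}
q_next = [0.5, 3.0, 1.0]      # Q(s_{t+1}, a) for each action a

target = reward + gamma * max(q_next)     # the NN regression target
td_error = target - q_current             # the term inside the brackets
q_updated = q_current + alpha * td_error  # the tabular Q update
print(target, td_error, q_updated)
```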
26,317
Real life examples of difference between independence and correlation
Stock returns are a decent real-life example of what you're asking for. There's very close to zero correlation between today's and yesterday's S&P 500 return. However, there is clear dependence: squared returns are positively autocorrelated; periods of high volatility are clustered in time. R code:

library(ggplot2)
library(grid)
library(quantmod)

symbols <- new.env()
date_from <- as.Date("1960-01-01")
date_to <- as.Date("2016-02-01")
getSymbols("^GSPC", env=symbols, src="yahoo", from=date_from, to=date_to)  # S&P500

df <- data.frame(close=as.numeric(symbols$GSPC$GSPC.Close), date=index(symbols$GSPC))
df$log_return <- c(NA, diff(log(df$close)))
df$log_return_lag <- c(NA, head(df$log_return, nrow(df) - 1))

cor(df$log_return, df$log_return_lag, use="pairwise.complete.obs")      # 0.02
cor(df$log_return^2, df$log_return_lag^2, use="pairwise.complete.obs")  # 0.14

acf(df$log_return, na.action=na.pass)      # Basically zero autocorrelation
acf((df$log_return^2), na.action=na.pass)  # Squared returns positively autocorrelated

p <- (ggplot(df, aes(x=date, y=log_return)) +
      geom_point(alpha=0.5) +
      theme_bw() +
      theme(panel.border=element_blank()))
p
ggsave("log_returns_s&p.png", p, width=10, height=8)

The timeseries of log returns on the S&P 500: If returns were independent through time (and stationary), it would be very unlikely to see those patterns of clustered volatility, and you wouldn't see autocorrelation in squared log returns.
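The same qualitative pattern can be reproduced in a self-contained simulation (an ARCH(1)-style process with made-up parameters, not fitted to the S&P 500): the simulated returns are serially uncorrelated, yet their squares are clearly autocorrelated.

```python
import random

random.seed(1)

# Simulate an ARCH(1)-style series: today's volatility depends on
# yesterday's squared return, so volatility clusters in time.
n = 20000
returns = []
prev = 0.0
for _ in range(n):
    sigma = (0.1 + 0.5 * prev ** 2) ** 0.5
    prev = sigma * random.gauss(0.0, 1.0)
    returns.append(prev)

def lag1_autocorr(series):
    # sample Pearson correlation between the series and its lag-1 copy
    a, b = series[1:], series[:-1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

print(lag1_autocorr(returns))                   # close to zero
print(lag1_autocorr([x * x for x in returns]))  # clearly positive
```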
26,318
Real life examples of difference between independence and correlation
Another example is the relationship between stress and grades on an exam. The relationship is an inverse U shape and the correlation is very low even though causation seems pretty clear.
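A minimal numeric illustration of such an inverse-U shape (made-up values): y is completely determined by x, yet the Pearson correlation is exactly zero.

```python
# Inverse-U relationship: y = -x^2 peaks at x = 0 and falls off
# symmetrically, so the linear correlation cancels out exactly.
x = [-2, -1, 0, 1, 2]
y = [-(xi ** 2) for xi in x]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
var_x = sum((a - mx) ** 2 for a in x)
var_y = sum((b - my) ** 2 for b in y)
r = cov / (var_x * var_y) ** 0.5
print(r)   # zero correlation despite perfect dependence
```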
26,319
Incorporating Prior Class Probability Distribution in Logistic Regression
Let $Y$ be the binary response variable and $X$ the vector of predictors with density $f$ (which would either be continuous, discrete or a combination of both). Note that $$ \frac{P(Y = 1 \mid X = x)}{P(Y = 0 \mid X = x)} = \frac{P(Y = 1) f_{X \mid Y=1}(x)}{P(Y = 0) f_{X \mid Y=0}(x)} $$ and so $$ \log \left ( \frac{P(Y = 1 \mid X = x)}{P(Y = 0 \mid X = x)} \right ) = \log \left ( \frac{P(Y = 1)}{P(Y = 0)} \right ) + \log \left ( \frac{f_{X \mid Y=1}(x)}{f_{X \mid Y=0}(x)} \right ) . $$ This means that under a logistic regression model the logarithm of the prior odds of the event $\{ Y = 1 \}$ appears as an additive constant in the conditional log odds. What you might consider then is an intercept adjustment where you subtract off the logit of the empirical odds and add the logit of the prior odds. But, assuming that the prior probability is accurate, this isn't expected to have much of an effect on the model. This type of adjustment is made primarily after some sampling procedure that artificially alters the proportion of events in the data.
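A minimal sketch of that intercept adjustment (the 50% training proportion and 10% prior are assumed numbers, not from the question): subtract the logit of the empirical proportion from a predicted logit and add the logit of the prior.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

p_train = 0.5   # event proportion in the (possibly resampled) training data
p_prior = 0.1   # believed prior probability of the event

def adjust(p_hat):
    # shift the conditional log odds by the difference of prior log odds
    return sigmoid(logit(p_hat) - logit(p_train) + logit(p_prior))

print(adjust(0.5))   # a 50/50 prediction maps onto the 10% prior
```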
26,320
Incorporating Prior Class Probability Distribution in Logistic Regression
For random forest, the default prior is the empirical class distribution of the training set. You would like to adjust this prior when you expect the training set class distribution to be far from matching new test observations. The prior can be adjusted by stratification/downsampling or class_weights. Stratification/downsampling does not mean that some observations are being discarded; they'll just be bootstrapped into fewer root nodes. Besides adjusting the prior, it is also possible to obtain probabilistic predictions from the random forest model and choose a threshold of certainty. In practice, I find a mix of adjusting priors by stratification and choosing the best threshold to be the best-performing solution. Use ROC plots to decide on thresholds. Adjusting class_weights will likely provide similar performance, but it is less transparent what the effective prior becomes. For stratification, the ratio of stratification is simply the new prior. See also this answer for more details
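A toy sketch of how the stratification ratio becomes the new prior (the 90/10 class mix and the 50/50 target are made-up numbers; this mimics only the bootstrap step, not a full random forest):

```python
import random

random.seed(0)

# Made-up labels: the empirical prior of class 1 is 10%.
labels = [0] * 900 + [1] * 100

def stratified_bootstrap(labels, n_per_class=100):
    # draw the same number of bootstrap samples from each class, so the
    # stratification ratio (here 50/50) is the new effective prior
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    return [random.choice(idx) for idx in by_class.values()
            for _ in range(n_per_class)]

sample = stratified_bootstrap(labels)
new_prior = sum(labels[i] for i in sample) / len(sample)
print(new_prior)   # 0.5 by construction
```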
26,321
RBF SVM use cases (vs logistic regression and random forest)
I will try to answer this question with a combination of published evidence, personal experience, and speculation.

A) Published evidence. The only paper I know of that helps answer the question is Delgado et al. 2014 - Do we Need Hundreds of Classifiers to Solve Real World Classification Problems? - JMLR, which runs hundreds of different algorithms and implementations on 121 datasets from UCI. They find that although RBF SVM is not the "best" algorithm (it is random forests, if I remember correctly), it is among the top 3 (or 5). If you consider that their selection of datasets is a "good sample" of real-world problems, then SVM is definitely an algorithm that should be tried on new problems, but one should try random forests first! The limits on generalizing that result are that the datasets are almost all tall and skinny (n >> p), not very sparse - which I speculate should be more of a problem for RF - and not very big (in both n and p). Finally, still on the published evidence, I recommend two sites that compare different implementations of random forests: Benchmarking Random Forest Implementations and Benchmarking Random Forest Classification.

B) Personal experience. I believe that papers such as Delgado et al.'s are very important for the machine learning community, so I tried to replicate their results under somewhat different conditions. I ran some 15 different algorithms on 100+ binary datasets (from Delgado's set of datasets). I also think I was more careful in the selection of hyperparameters than they were. My result was that SVM was the "best algorithm" (mean rank 4.9). My take is that SVM passed RF because the original dataset contained many multiclass problems - which, as I will discuss in the speculation part, should be a problem for SVM. EDIT (Jun/16): But RF is way, way faster, and it was the 2nd best algorithm (mean rank 5.6), followed by gbm (5.8), nnets (7.2), and so on.

I did not try standard logistic regression on these problems, but I did try an elastic net (L1- and L2-regularized LR) and it did not perform well (mean rank 8.3). I have not yet finished analyzing the results or writing the paper, so I cannot even point to a technical report with the results. Hopefully, in some weeks I can re-edit this answer and point to one. EDIT: The paper is now available at http://arxiv.org/abs/1606.00930 It turns out that after the full analysis RF and SVM are almost equivalent in terms of expected error rate, and SVM is the fastest (to my surprise!!). I am no longer that emphatic in recommending RF (on speed grounds). So my personal experience is that although SVM may get you some extra bit of accuracy, it is almost always a better choice to use an RF. Also, for larger problems it may be impossible to use a batch SVM solver (I have never used an online SVM solver such as LASVM or others).

Finally, I used logistic regression in only one situation. I was doing some "intense" feature engineering on an image classification problem (such as whether or not to combine two different descriptions of the image, and the dimensionality of the descriptions). I used logistic regression to select among the many alternatives (because there is no hyperparameter search in LR). Once we settled on the best features (according to LR), we used an RF (selecting the best hyperparameters) to get the final classifier.

C) Speculation. I have never seriously worked on multiclass problems, but my feeling is that SVMs are not so good at them. The problem is not the choice between one-vs-one and one-vs-all solutions, but that all implementations I know of use the same hyperparameters for all the (OVO or OVA) classifiers. Selecting the correct hyperparameters for SVM is so costly that none of the off-the-shelf implementations I know will do a search for each classifier. I speculate that this is a problem for SVM (but not a problem for RF!!).

Then again, for multiclass problems I would go straight to RF.
26,322
RBF SVM use cases (vs logistic regression and random forest)
I don't have sufficient privileges to be able to write comments, so I will just provide my input/observations here as an answer. In my experience, Support Vector Classifiers (SVC) tend to be at par with or outperform the other methods when the binary classes are balanced. For unbalanced classes, SVC tends to perform poorly. I don't often deal with multiclass problems, but I have seen some good results with SVC for multiclass problems as well. Another thing I've noticed is that the curse of dimensionality doesn't seem to affect SVC as much as other modeling techniques. In other words, as I add more terms to the model, the other techniques start performing poorly on the test (or holdout) set compared to the training set, but not so much with SVC. For this reason, if model parsimony is not your priority, SVC may be a better option, as you can throw in a lot of terms without as much over-fitting as with the other methods. One of the issues I have with SVC is that it doesn't implicitly provide a measure (like a predicted probability) to rank-order the observations. You could use Platt scaling (implemented in the sklearn.svm package in Python), but I have seen some inconsistencies. (I can share the details if anyone's interested.) Not sure if this really answers your question, but these are my observations. Hope that helps.
26,323
RBF SVM use cases (vs logistic regression and random forest)
RF and (RBF) SVM have different theories behind them, but assuming you have enough data, they perform similarly well. They both can learn complex functions and deal nicely with noisy and uninformative variables and outliers. If you are trying to get the best results for something like a Kaggle competition, you would ensemble multiple models, including RF and SVM, anyway. In non-Kaggle settings, you might consider how hard it is to implement the model, put it into production, make a prediction, interpret it, explain it to a manager, etc. SVM (linear or highly regularized RBF) would definitely be preferred if you have a small amount of data or you are dealing with the curse of dimensionality. There are a couple of reasons for this: one is that it is better to look for the maximum-margin hyperplane than for a series of best splits on your features; also, there is usually no need for a complex boundary, because in a high-dimensional space there will be some hyperplane that can separate the data anyway. Another issue is that RF is harder to tune (it has more parameters to tune), so you need more data. Another thing: cross-validation can be very cheap and fast for SVM, especially LOOCV. Since only a few samples are support vectors (not always), you don't have to retrain your classifier on every fold, but only when the data that are now in the test set were support vectors before. This can also make online learning easier. Also, it might be cheaper to store support vectors than full trees. It is often better to make a probabilistic model than a classifier. So, make the model first and the decision later. In that case logistic regression will be preferred. And you can still use kernels and regularization to make it behave as you want. Also, you will not use RF to answer questions like: correcting for age, lifestyle, sex and education, does drinking alcohol increase the chance of dying of a heart attack? 
Some additional resource I found interesting: https://www.quora.com/What-are-the-advantages-of-different-classification-algorithms http://videolectures.net/solomon_caruana_wslmw/
RBF SVM use cases (vs logistic regression and random forest)
RF and (RBF) SVM have different theories behind them, but assuming you have enough data, they perform similarly well. They both can learn complex functions and deal nicely with noisy and uninformative
RBF SVM use cases (vs logistic regression and random forest) RF and (RBF) SVM have different theories behind them, but assuming you have enough data, they perform similarly well. They both can learn complex functions and deal nicely with noisy and uninformative variables and outliers. If you are trying to get best results for something like a kaggle, you would ensemble multiple models including RF and SVM anyway. In non kaggle settings, you might consider how hard is it to implement the model, put it to production, make a prediction, interpret, explain it to a manager etc. SVM (linear or highly regularized RBF) would be definitely preferred if you have small amount of data or you are dealing with a course of dimensionality. There is couple of reasons for it, one is that is better to look for maximum margin hyperplane instead of series of best splits on your features, also there is usually no need for a complex boundary because in high dimensional space there will be some hyperplane that can separate the data anyway. Another issue is that RF is harder to tune (has more parameters to tune), so you need more data. Another think, cross validation can be very cheap and fast for SVM, especially LOOCV. Since only a few samples are support vectors (not always), you don have to retrain your classifier on every fold, but only when the data that are now in the test set were support vectors before. This can also make online learning easier. Also, it might be cheaper to store support vectors than full trees. Is often better to make probabilistic model than classifier. So, make model first and decision later. In that case logistic regression will be preferred. And you can still use kernels and regularization to make it behave like you want. Also, you will not use RF to answer questions like: correcting for age, lifestyle, sex and education, does drinking alcohol increase chance of dyeing of heart attack? 
Some additional resource I found interesting: https://www.quora.com/What-are-the-advantages-of-different-classification-algorithms http://videolectures.net/solomon_caruana_wslmw/
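To illustrate the first claim (comparable accuracy given enough data), here is a minimal sketch assuming scikit-learn is available; the synthetic dataset and parameter choices are my own, not from the original answer:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic data with several noisy / uninformative features.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The RBF SVM benefits from feature scaling; the random forest does not need it.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

svm_acc = svm.score(X_te, y_te)
rf_acc = rf.score(X_te, y_te)
```

On a toy problem like this, both models typically land in the same accuracy range, which is the point being made above.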
26,324
Difference between centered and uncentered $R^2$?
I don't know much about econometrics. But I think your question is a statistical one in essence. Consider an OLS model $$\boldsymbol{y}=\boldsymbol{X\beta}+\boldsymbol{\varepsilon}.$$ Let $V=\operatorname{col}(\boldsymbol{X})$, and take $V_0\subset V$ to be the "intercept subspace". $R^2$ can be defined as a ratio of two "sums of squares": $$R^2=\frac{\lVert\hat{\boldsymbol{y}}-\hat{\boldsymbol{y}}_0\rVert^2}{\lVert\boldsymbol{y}-\hat{\boldsymbol{y}}_0\rVert^2}.$$ Using projection matrices, which are idempotent and symmetric, this is equivalent to $$R^2=\frac{\lVert(\boldsymbol{P}_V-\boldsymbol{P}_{V_0})\boldsymbol{y}\rVert^2}{\lVert(\boldsymbol{I}-\boldsymbol{P}_{V_0})\boldsymbol{y}\rVert^2}=\frac{\lVert(\boldsymbol{I}-\boldsymbol{P}_{V_0})\boldsymbol{P}_V\boldsymbol{y}\rVert^2}{\lVert(\boldsymbol{I}-\boldsymbol{P}_{V_0})\boldsymbol{y}\rVert^2}=\frac{\boldsymbol{y}'\boldsymbol{P}_V(\boldsymbol{I}-\boldsymbol{P}_{V_0})\boldsymbol{P}_V\boldsymbol{y}}{\boldsymbol{y}'(\boldsymbol{I}-\boldsymbol{P}_{V_0})\boldsymbol{y}}.$$ In the usual OLS we take $\boldsymbol{X}=\begin{pmatrix}\boldsymbol{1}_n&\boldsymbol{x}\end{pmatrix}$ and $V_0=\operatorname{span}\{\boldsymbol{1}_n\}$. Then $$\boldsymbol{I}-\boldsymbol{P}_{V_0}=\boldsymbol{I}-\frac{1}{n}\boldsymbol{1}_n\boldsymbol{1}_n'=\boldsymbol{M}_1$$ (which shows the "residual maker" $\boldsymbol{M}_1$ is the projection matrix onto $V_0^\perp$). If we force the intercept term to be $0$, we choose $\boldsymbol{X}=\begin{pmatrix}\boldsymbol{0}_n&\boldsymbol{x}\end{pmatrix}$ and $V_0=\operatorname{span}\{\boldsymbol{0}_n\}=\{\boldsymbol{0}_n\}$.
Then $\boldsymbol{I}-\boldsymbol{P}_{V_0}=\boldsymbol{I}$, so $$R^2=\frac{\boldsymbol{y}'\boldsymbol{P}_V(\boldsymbol{I}-\boldsymbol{P}_{V_0})\boldsymbol{P}_V\boldsymbol{y}}{\boldsymbol{y}'(\boldsymbol{I}-\boldsymbol{P}_{V_0})\boldsymbol{y}}=\frac{\boldsymbol{y}'\boldsymbol{P}_V\boldsymbol{y}}{\boldsymbol{y}'\boldsymbol{y}}.$$ In short, the "centered" $R^2$ is the usual $R^2$, and the "uncentered" $R^2$ is the $R^2$ when the model does not contain an intercept term. The word "centered", I think, comes from the fact that $$\boldsymbol{P}_{V_0}\boldsymbol{y}=\frac{1}{n}\boldsymbol{1}_n\boldsymbol{1}_n'\boldsymbol{y}=\bar{y}\boldsymbol{1}_n.$$
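The projection formulas above are easy to check numerically. A sketch with numpy (the data and variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 1.5 + 2.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])   # design matrix with intercept
P_V = X @ np.linalg.pinv(X)            # projection onto col(X) (the hat matrix)
P_V0 = np.ones((n, n)) / n             # projection onto span{1_n}

# Centered R^2 via ||(P_V - P_V0) y||^2 / ||(I - P_V0) y||^2
r2_centered = (np.sum(((P_V - P_V0) @ y) ** 2)
               / np.sum(((np.eye(n) - P_V0) @ y) ** 2))

# The same quantity via the familiar 1 - RSS/TSS
yhat = P_V @ y
r2_classic = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Uncentered R^2: y' P_V y / y' y  (the V_0 = {0} case)
r2_uncentered = (y @ P_V @ y) / (y @ y)
```

The projection version and the 1 - RSS/TSS version agree to machine precision, while the uncentered version differs because it also "explains" the mean of y.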
26,325
Difference between centered and uncentered $R^2$?
The above answer is not correct based on my experience with econometrics. I hope this adds some additional flavor and intuition to the post linked in the first comment of the OP's question above. Centered R2 is the usual measure, and it effectively assesses the improvement in accuracy that your linear model (with a constant/intercept or not) has over just using the mean. If the model is worse than the mean, R2 is negative (this can't happen with a regression that includes a constant/intercept term). Centered R2 is the same as the Nash-Sutcliffe efficiency for y and yhat. Uncentered R2 is uncommon and just tells you how much of y (rather than variation in y about its mean) has been explained. Uncentered R2 is a measure that gives a trophy to the loser for participation, which in this case is explaining the non-varying part of y. Centered R2 gives no points for explaining a non-varying quantity, and the score starts at 0 when accuracy is equivalent to the mean. Following [https://stats.stackexchange.com/a/26205/297006], it seems disingenuous that R would provide uncentered R2 when the regression lacks an intercept. That positive uncentered R2 value may well be for predictions (yhat) that are worse than the mean (but there's a trophy for you regardless). I would say never (for econometric analysis, machine learning, and other standard statistical applications) use uncentered R2, but if you do, make sure you don't compare it to centered R2 and assume you have a better fit because the score is higher. If you center y by subtracting its mean before the regression and then exclude an intercept term from your regression, then centered R2 and uncentered R2 are identical.
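The last point, that centering y and dropping the intercept makes the two measures coincide, can be checked numerically. A sketch with numpy and made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
y = 3.0 + 0.8 * x + rng.normal(size=n)

# Center y, then regress on x with no intercept term.
yc = y - y.mean()
beta = (x @ yc) / (x @ x)        # no-intercept OLS slope
yhat = beta * x
rss = np.sum((yc - yhat) ** 2)

r2_centered = 1 - rss / np.sum((yc - yc.mean()) ** 2)
r2_uncentered = 1 - rss / np.sum(yc ** 2)
```

Because the centered y has mean zero, the two denominators are the same and the two R2 values agree.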
26,326
Binary time series
One approach might be to assume that the Bernoulli sequence can be described by a latent Normal random variable via the probit transformation. That is, your realized $X_t \sim \text{Bernoulli}(p_t)$ where $p_t = \Phi(Y_t)$ (equivalently $\Phi^{-1}(p_t) = Y_t$) and $Y \sim N(\mu, \Sigma)$. This way you can place whatever time-series (e.g. ARIMA) structure you like on your $Y$ variable and then use standard time-series techniques to predict future observations (e.g. Holt-Winters). It should be possible to code something like this up in Stan or JAGS, but you might not get great predictions given the "glass darkly" view the Bernoulli process gives you of the latent state.
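A minimal simulation sketch of this setup, assuming numpy and using an AR(1) latent state (all names and parameter values are my own, not part of the original answer):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
T, phi = 500, 0.9

# Latent Gaussian AR(1) state Y_t = phi * Y_{t-1} + noise
lat = np.zeros(T)
for t in range(1, T):
    lat[t] = phi * lat[t - 1] + rng.normal(scale=0.5)

# Probit transform: p_t = Phi(Y_t), then Bernoulli observations X_t
p = np.array([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in lat])
x = rng.binomial(1, p)
```

Fitting the reverse direction (recovering the latent state from the 0/1 sequence) is what Stan or JAGS would be used for; the persistence of the latent state shows up as positive lag-1 correlation in the binary series.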
26,327
Binary time series
The simplest model would be linear regression. You can plot your data using ggplot:

library(ggplot2)

# for reproducibility
set.seed(200)

# simple example: assume your data is a binary variable that equals 1 with probability 0.7
data <- data.frame(time = 1:200,
                   val = sample(c(0, 1), size = 200, replace = TRUE, prob = c(0.3, 0.7)))

# plot using ggplot, adding a linear regression line and confidence interval
ggplot(data, aes(x = time, y = val)) + geom_smooth(method = lm) + geom_point()

# now fit the linear regression of val on time
fitData <- lm(val ~ time, data = data)
predict(fitData, newdata = data.frame(time = 201:224), interval = "confidence")

This is the simplest model; there are other non-linear models that might fit your data better. Also, bear in mind that you might have to use the log of time to get a better fit. You can read a lot about non-linear regressions such as polynomial regression here. Now, it would require additional analysis, but it is essential to establish whether your events are independent. It is possible that there is some sort of confounding variable that you might not have accounted for. You might want to look into Bayesian linear regression (given you obtain more dimensions than just time and yes/no values) here
26,328
Binary time series
Accident data? I'd start by assuming there's hourly seasonality and daily seasonality. Without knowing the type of accident, it may be that you could look at hourly pooling Monday through Friday, and handle hourly for Saturday and Sunday separately, so you have 3 pools of hours, 24 (Mon-Fri), 24 (Sat) and 24 (Sun). Further data reduction might be possible, but assuming not, just take the averages. For example, the average for Sunday 3pm might be .3 (30% chance of an accident). The average for 4pm might be .2, and so on. The probability of no accident occurring in 3pm or 4pm would be (1-.3)(1-.2) = .56, so the probability of having an accident in these two hours would be .44, and so on. This seems to be a good, simple place to start.
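The arithmetic in that example generalizes to any list of hourly averages. A sketch in Python using the made-up numbers from above:

```python
import numpy as np

# Hourly accident probabilities, e.g. the Sunday 3pm and 4pm averages above.
hourly_p = np.array([0.3, 0.2])

# Probability of no accident in any of the hours (assuming independence),
# and of at least one accident across them.
p_none = float(np.prod(1 - hourly_p))
p_any = 1 - p_none
```

With the two example hours this gives p_none = 0.56 and p_any = 0.44, matching the hand calculation; the same product works for any pool of hours.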
26,329
The "strongest password"
You've asked a statisticians' forum for help on this question, so I'll provide a statistically-based answer. Thus it's reasonable to assume you're interested in the probability of guessing a PIN at random (for some definition of random), but that's reading more into the question than is provided. My approach will be to enumerate all possible options without restriction, then subtract the void options. This has a sharp corner to it, though, called the inclusion-exclusion principle, which corresponds to the intuitive idea that you don't want to subtract the same thing from a set twice! In a six-digit PIN with no restrictions and a decimal number system, there are $10^6$ possible combinations, from $000 000$ to $999 999:$ each digit has 10 options. Consider what "two adjacent, identical" digits looks like: $AAXXXX$, where the positions labelled $A$ are the same and $X$ can be any decimal digit. Now consider how many other ways the string $AA$ can be arranged in six digits: $XAAXXX$, $XXAAXX$, $XXXAAX$, and $XXXXAA$. So for any particular ordering (one of those options), there are $10^4$ combinations, since the four unrestricted $X$ digits each have 10 options. Now, how many choices of $A$ are there? We're working with decimal digits, so there must be 10. So there are $10^5$ choices for a particular ordering. There are five such orderings, so there are $5\times10^5$ arrangements that satisfy this definition (counted with multiplicity: a PIN containing more than one adjacent pair appears once per pair, so inclusion-exclusion is needed again to get the exact size of this set). (What this means in terms of security might be measured in terms of an information-theoretic measure of how much this reduces the entropy of the PIN space.) Now consider what consecutive numbers look like. In the string $ABCXXX$, if we know A, we also know B and C*: if A is 5, then B is 6 and C is 7. So we can enumerate these options: 012XXX 123XXX 234XXX 345XXX 456XXX 567XXX 678XXX 789XXX and at this point it's unclear if there's a "wrapping around." If there is, we also include 890XXX 901XXX Each solution has $10^3$ associated combinations, by the same reasoning as above. 
So just count out how many solutions there must be. Keep in mind to count alternative orderings, such as $XABCXX.$ Now we get to the sharp corner, which is the inclusion-exclusion principle. Consider three sets: A. All six-digit PINs B. Void PINs due to "adjacent digits" C. Void PINs due to "sequential digits" There's an additional subtlety, which is that there are some 6-digit numbers which can be allocated to both $B$ and $C$. So if we compute the number of permissible PINs as $|S|=|A|-|B|-|C|,$ we're subtracting out those numbers twice, and our answer is incorrect. The correct computation is $|S|=|A|-|B|-|C|+|B\cap C|,$ where $B\cap C$ is the set of elements in both $B$ and $C$. So we must determine how many ways a number can fall in both $B$ and $C$. There are several ways this can occur: $AABCXX$ $ABCXDD$ and so on. So you have to work out a systematic approach to this as well, as well as a way to keep track of alternative orderings. Using the same logic that I've applied above, this should be very tractable, if slightly tedious. Just keep in mind how many alternative ways there might be to satisfy both B and C. Slightly more advanced approaches would take advantage of basic combinatoric results and the fundamental theorem of counting, but I chose this avenue as it places the smallest technical burden on the reader. Now, for this to be a well-formed probability question, we have to have some measure of probability for each arrangement. In the assumption of a naive attack, one might assume that all digit combinations have equal probability. In this scenario, the probability of a randomly-chosen combination is $\frac{1}{|S|}.$ If that's the kind of attack you're most interested in preventing, though, then the proposed set of criteria obviously weakens the system, because some combinations are forbidden, so only a dumb attacker would try them. I leave the rest of the exercise to the reader. 
The wrinkle of "five until lockout" is decidedly the better guard against unauthorized access, since in either the 4-digit or the 6-digit scheme, there are a very large number of options, and even five different, random guesses have a low probability of success. For a well-posed probability question, it's possible to compute the probability of such an attack succeeding. But other factors than probability of sequences of numbers may influence the security of the PIN mechanism. Chiefly, people tend not to choose PINs at random! For example, some people use their date of birth, or DOB of children, or some similarly personally-related number as a PIN. If an attacker knows the DOB of the user, then it will probably be among the first things they try. So for a particular user, some combinations may be more likely than others. *The sequences you list are strictly increasing, and it's unclear whether both increasing and decreasing when you say "three-running number."
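As a side check on the "adjacent identical pair" count, a brute-force enumeration (my own sketch) confirms that the number of six-digit PINs containing at least one adjacent identical pair is $10^6 - 10\cdot 9^5$: a PIN with no adjacent repeats has 10 choices for its first digit and 9 for each later digit.

```python
def has_adjacent_pair(pin: str) -> bool:
    """True if any two neighboring digits of the PIN string are equal."""
    return any(a == b for a, b in zip(pin, pin[1:]))

# Enumerate all six-digit strings 000000..999999 and count the "void" ones.
count_B = sum(has_adjacent_pair(f"{n:06d}") for n in range(10**6))
# count_B == 10**6 - 10 * 9**5 == 409510
```

This also shows why the $5\times10^5$ figure in the answer is an overcount: PINs with several adjacent pairs get counted once per pair.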
26,330
The "strongest password"
Obtaining a closed formula seems complex. However, it is quite easy to enumerate them. There are 568 916 possible codes for the second solution, which is bigger than the number of solutions with a four-digit PIN code. The code to enumerate them is below. Though not optimized, it only takes seconds to run. Note: I assumed that the sequence had to be in increasing order (which can be easily modified in three_running).

N = 10**6  # enumerate all six-digit strings 000000..999999

def same_consecutive_digits(x):
    x_string = str(x).zfill(6)
    for i in range(1, len(x_string)):
        if x_string[i] == x_string[i - 1]:
            return True
    return False

def three_running(x):
    x_string = str(x).zfill(6)
    for i in range(2, len(x_string)):
        if (int(x_string[i]) == int(x_string[i - 1]) + 1
                and int(x_string[i - 1]) == int(x_string[i - 2]) + 1):
            return True
    return False

def valid(x):
    return not same_consecutive_digits(x) and not three_running(x)

assert same_consecutive_digits(88555)
assert same_consecutive_digits(123)  # zero-padded to "000123"
assert not same_consecutive_digits(852123)
assert three_running(123456)
assert not three_running(4587)
assert valid(134679)
assert not valid(123894)
assert not valid(111111)
assert not valid(151178)
assert valid("031278")

accepted = [i for i in range(N) if valid(i)]
print(len(accepted))
26,331
OLS vs. maximum likelihood under Normal distribution in linear regression
OLS does not make a normality assumption for the model errors. OLS can be used under different distributional assumptions, and the estimator still makes sense as the minimum-variance linear unbiased estimator. Maximum likelihood (ML) can also accommodate different distributions, but the distribution has to be chosen in advance. If the actual distribution differs from the assumed one, the ML estimator no longer makes sense as the estimator that maximizes the joint probability density of the data. Thus we can say that in a particular application ML makes a more stringent assumption about the model errors than OLS does.
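To see the first point concretely: OLS still recovers the coefficients when the errors are Laplace rather than normal. A sketch with numpy (toy data of my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
eps = rng.laplace(scale=1.0, size=n)   # deliberately non-normal errors
y = 2.0 + 3.0 * x + eps                # true intercept 2, slope 3

# Ordinary least squares via the normal equations (no normality used anywhere).
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The estimates land close to the true (2, 3) despite the Laplace errors; a Gaussian MLE fitted to the same data would give the same point estimates for the coefficients, but it would be maximizing the wrong likelihood.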
26,332
AIC formula in Introduction to Statistical Learning
I think that you are confusing the two residual sums of squares that you have. You have one RSS to estimate the $\hat{\sigma}^2$ in the formula; this RSS is in some sense independent of the number of parameters, $p$. This $\hat{\sigma}^2$ should be estimated using all your covariates, giving you a baseline unit of error. You could call the RSS in the formula for AIC $\text{RSS}_{p_i}$, meaning that it corresponds to model $i$ with $p$ parameters (there may be many models with $p$ parameters). So the RSS in the formula is calculated for a specific model, while the RSS for $\hat{\sigma}^2$ is for the full model. This is also noted on the page before, where $\hat{\sigma}^2$ is introduced for $C_p$. So the RSS in the AIC formula is not independent of $p$; it is calculated for a given model. Introducing $\hat{\sigma}^2$ to all of this is just to have a baseline unit for the error, so that there is a "fair" comparison between the number of parameters and the reduction in error. You need to compare the number of parameters to something that is scaled w.r.t. the magnitude of the error. If you did not scale the RSS by the baseline error, it might be that the RSS drops much more than the number of variables introduced, and thus you become too greedy in adding more variables. If you scale it to some unit, the comparison to the number of parameters is independent of the magnitude of the baseline error. This is not the general way to calculate AIC, but it essentially boils down to something similar to this in cases where it is possible to derive simpler versions of the formula.
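A numerical sketch of this recipe, with $\hat{\sigma}^2$ estimated once from the full model and the RSS recomputed per submodel (data and names are my own; the criterion mirrors ISL's least-squares AIC, $(\text{RSS} + 2d\hat{\sigma}^2)/(n\hat{\sigma}^2)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X_full = rng.normal(size=(n, 3))
y = 2.0 * X_full[:, 0] + rng.normal(size=n)   # only the first column matters

def rss(X, y):
    """Residual sum of squares for OLS with an intercept."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return np.sum((y - Z @ beta) ** 2)

# Baseline error variance from the FULL model (3 predictors + intercept).
sigma2 = rss(X_full, y) / (n - 4)

def aic_like(X, y, d):
    # (RSS_submodel + 2 d sigma^2) / (n sigma^2): submodel RSS, full-model sigma^2
    return (rss(X, y) + 2 * d * sigma2) / (n * sigma2)

aic_true = aic_like(X_full[:, [0]], y, d=2)    # intercept + informative column
aic_noise = aic_like(X_full[:, [1]], y, d=2)   # intercept + noise column
```

Both submodels pay the same parameter penalty, but the one containing the informative column has a much smaller RSS and therefore a smaller criterion value, which is exactly the comparison the scaled formula enables.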
26,333
AIC formula in Introduction to Statistical Learning
Unfortunately this will be a rather unsatisfying answer... First of all, for the AIC calculation you will usually use the maximum likelihood estimate of $\sigma^2$, which is biased. That reduces to $\hat{\sigma}^2 = \frac{RSS}{n}$, and ultimately the calculation you do would reduce to $1+2\frac{d}{n}$. Second, I would refer you to the Wikipedia article on AIC, in particular the equivariance cases section. As you see there, it is clear that most derivations omit a constant $C$. This constant is irrelevant for model-comparison purposes, so it is omitted. It is somewhat common to see contradictory derivations of AIC exactly because of that issue. For example, Johnson & Wichern's Applied Multivariate Statistical Analysis, 6th edition, gives AIC as $n \log(\frac{RSS}{n}) + 2d$ (Chapt. 7.6), which clearly does not equal the definition of James et al. that you are using. Neither book is wrong per se; people are just using different constants. In the case of the James et al. book, they simply do not seem to allude to this point. In other books, e.g. Ravishanker and Dey's A First Course in Linear Model Theory, this is even more profound, as the authors write: \begin{align} AIC(p) &= -2l(y; X, \hat{\beta}_{ML}, \hat{\sigma}_{ML}^2) + 2p \\ &= -N \log(\hat{\sigma}_{ML}^2)/2 - N/2 + 2p \qquad (7.5.10) \end{align} which, interestingly, cannot simultaneously be true either. As Burnham & Anderson (1998), Chapt. 2.2, write: "In the special case of least squares (LS) estimation with normally distributed errors, and apart from an arbitrary additive constant, AIC can be expressed as a simple function of the residual sum of squares."; B&A suggest the same AIC variant that J&W use. What messes you up is that particular constant (and the fact that you were not using the ML estimate for the residuals). Looking at C. M. Bishop's Pattern Recognition and Machine Learning (2006), I find an even more contradictory definition: \begin{align} AIC &= l(D|w_{ML}) - M \qquad (1.73) \end{align} which is funny because it not only omits the multiplier from the original paper but also flips the signs so it can treat AIC-based selection as a maximization problem... I would recommend sticking with the old-fashioned definition $−2\log(L)+2p$ if you want to do theoretical derivations. This is the one Akaike states in his original paper. All the other intermediate formulas tend to be messy and/or make some implicit assumptions. If it is any consolation, you "did nothing wrong".
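The "constants don't matter" point can be checked numerically (hypothetical RSS values): the $n\log(RSS/n)+2d$ variant and the $-2\log L + 2d$ variant differ by the same additive constant for every model, so they rank candidate models identically.

```python
import math

# Two common AIC variants for least squares with Gaussian errors differ
# only by an additive constant, so model rankings (and AIC differences)
# are identical. Hypothetical (RSS, d) pairs for three candidate models:
n = 50
models = [(12.0, 2), (9.5, 3), (9.4, 5)]

def aic_jw(rss, d):
    """Johnson & Wichern style: n*log(RSS/n) + 2d."""
    return n * math.log(rss / n) + 2 * d

def aic_full(rss, d):
    """-2*logL + 2d with the ML estimate sigma2 = RSS/n plugged in."""
    sigma2 = rss / n
    loglik = -n / 2 * (math.log(2 * math.pi * sigma2) + 1)
    return -2 * loglik + 2 * d

# The difference is the same constant, n + n*log(2*pi), for every model...
consts = [aic_full(r, d) - aic_jw(r, d) for r, d in models]
assert all(abs(c - consts[0]) < 1e-9 for c in consts)
# ...so both variants select the same model.
assert min(models, key=lambda m: aic_jw(*m)) == \
       min(models, key=lambda m: aic_full(*m))
```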
26,334
Why add one in inverse document frequency?
As you will see wherever tf-idf is discussed, there is no universally agreed single formula for computing tf-idf or even (as in your question) idf. The purpose of the $+1$ is to accomplish one of two objectives: a) to avoid division by zero, as when a term appears in no documents (even though this would not happen in a strictly "bag of words" approach), or b) to set a lower bound to avoid a term being given a zero weight just because it appeared in all documents. I've actually never seen the formulation $\log(1+\frac{N}{n_t})$, although you mention a textbook. But the purpose would be to set a lower bound of $\log(2)$ rather than zero, as you correctly interpret. I have seen $1 + \log(\frac{N}{n_t})$, which sets a lower bound of 1. The most commonly used computation seems to be $\log(\frac{N}{n_t})$, as in Manning, Christopher D., Prabhakar Raghavan, and Hinrich Schütze (2008), Introduction to Information Retrieval, Cambridge University Press, p. 118, or Wikipedia (based on similar sources). Not directly relevant to your query, but the upper bound is not $\infty$, but rather $k + \log(N/s)$ where $k, s \in \{0, 1\}$ depending on your smoothing formulation. This happens for terms that appear in 0 or 1 documents (again, it depends on whether you smooth with $s$ to make idf defined for terms with zero document frequency; if not, then the maximum value occurs for terms that appear in just one document). IDF $\rightarrow \infty$ when $1 + n_t = 1$ and $N \rightarrow \infty$.
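The three idf variants and their lower bounds can be compared directly (toy corpus size, hypothetical numbers):

```python
import math

# Lower bounds of the three idf variants discussed above.
# N = number of documents, n_t = documents containing term t.
N = 1000

def idf_plain(n_t):    return math.log(N / n_t)        # 0 when n_t = N
def idf_add_one(n_t):  return 1 + math.log(N / n_t)    # lower bound 1
def idf_inside(n_t):   return math.log(1 + N / n_t)    # lower bound log(2)

# A term appearing in every document:
assert idf_plain(N) == 0.0
assert idf_add_one(N) == 1.0
assert abs(idf_inside(N) - math.log(2)) < 1e-12
# A rare term still gets a large weight under every variant:
assert idf_plain(1) > idf_plain(N)
```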
26,335
Can quantiles be calculated for lognormal distributions?
Let's start with definitions and notation. A random variable $X$ is lognormal if its natural logarithm, $Y = \log(X)$, is normal. Denote by $M$ and $S$ the mean and standard deviation of $X$, and by $m$ and $s$ the mean and standard deviation of $Y$. Given $M$ and $S$, you can calculate $m$ and $s$ as: $m = \log[M^2/(M^2 + S^2)^{1/2}]$ and $s = (\log[(S/M)^2+1])^{1/2}$. To calculate a quantile of $X$, we use the fact that the exponential function (the inverse of the log function) is monotone increasing; it maps quantiles of $Y$ into quantiles of $X$. Suppose we want to calculate the .95 quantile of $X$ (nothing special about .95; substitute any quantile you like). Let $Q$ denote the .95 quantile of $X$ and $q$ the .95 quantile of $Y$. We know the mean and standard deviation, $M$ and $S$, of $X$. From these, we calculate the mean and standard deviation, $m$ and $s$, of $Y$. Since $Y$ is normal, we can easily calculate its .95 quantile as $q = m + z_{.95}\, s$, where $z_{.95} \approx 1.645$ is the .95 quantile of the standard normal. The .95 quantile $Q$ of $X$ is then simply $Q = \exp[q]$. Here is the original post by Glyn Holton: http://www.riskarchive.com/archive02_4/00000622.htm
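The recipe above, worked through in Python with hypothetical values of $M$ and $S$ (the standard library's `statistics.NormalDist` supplies the normal quantile):

```python
import math
from statistics import NormalDist

# Hypothetical lognormal mean M and standard deviation S
M, S = 10.0, 4.0

# Mean and sd of Y = log(X), from the conversion formulas above
s = math.sqrt(math.log((S / M) ** 2 + 1))
m = math.log(M ** 2 / math.sqrt(M ** 2 + S ** 2))

# .95 quantile of the normal Y, then map through exp
q = NormalDist(mu=m, sigma=s).inv_cdf(0.95)
Q = math.exp(q)

# Sanity check: exp(m + s^2/2) recovers the lognormal mean M
assert abs(math.exp(m + s ** 2 / 2) - M) < 1e-9
```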
26,336
Can quantiles be calculated for lognormal distributions?
I am not a statistician, but I am quite sure that the quantile function for the log-normal distribution is well-defined because it is the inverse of the cumulative distribution function, which is strictly increasing. For all continuous distributions, the ICDF exists and is unique if 0 < p < 1. (source) There is a software library (distributions-lognormal-quantile) I have used in some applications to evaluate that function, and I believe it uses this equation: $$F^{-1}(p) = \exp\left(\mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1)\right), \qquad 0 < p < 1.$$ This function is also available in Microsoft Excel as LOGNORM.INV.
26,337
Can quantiles be calculated for lognormal distributions?
Here is the proof. Take $\log X \sim \mathcal{N}(\mu, \sigma)$. Then $X$ is log-normally distributed with CDF: $$ F(x) = \frac{1}{2}\left(1 + \operatorname{erf} \left(\frac{\log x - \mu}{\sigma \sqrt{2}} \right) \right). $$ Setting $u = F(x)$ and solving for $x = F^{-1}(u)$: \begin{align} u &= \frac{1}{2}\left(1 + \operatorname{erf} \left(\frac{\log F^{-1}(u) - \mu}{\sigma \sqrt{2}} \right) \right) \\ \operatorname{erf}^{-1} \left(2u-1\right) &= \frac{\log F^{-1}(u) - \mu}{\sigma \sqrt{2}} \\ \sigma \sqrt{2}\, \operatorname{erf}^{-1} \left(2u-1\right) +\mu &= \log F^{-1}(u) \\ \exp\left(\sigma \sqrt{2}\, \operatorname{erf}^{-1} \left(2u-1\right) +\mu\right) &= F^{-1}(u) \end{align} which is what iX3 got.
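A numerical check of the derivation (hypothetical $\mu, \sigma$): the claimed quantile $x = \exp(\sigma\sqrt{2}\,\operatorname{erf}^{-1}(2u-1)+\mu)$ should satisfy $F(x)=u$. Python's standard library has `erf` but not `erfinv`, so we use `NormalDist.inv_cdf`, which equals $\sqrt{2}\,\operatorname{erf}^{-1}(2u-1)$.

```python
import math
from statistics import NormalDist

mu, sigma = 0.5, 1.2   # hypothetical parameters

def lognorm_cdf(x):
    """Lognormal CDF F(x) = (1/2)(1 + erf((log x - mu)/(sigma*sqrt(2))))."""
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

for u in (0.05, 0.5, 0.95):
    # sqrt(2)*erfinv(2u-1) == standard normal quantile at u
    x = math.exp(sigma * NormalDist().inv_cdf(u) + mu)
    assert abs(lognorm_cdf(x) - u) < 1e-9   # F(F^{-1}(u)) == u
```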
26,338
Where is the dominated convergence theorem being used?
\begin{align*} n \text{Var}(\bar{X}_n) &= n^{-1} \sum_{i=1}^n \sum_{j=1}^n \text{Cov}(X_i,X_j) \\ &= n^{-1} \sum_{i=1}^n \sum_{j=1}^n \gamma(i-j) \\ &= n^{-1} \sum_{h = -(n-1)}^{n-1} (n-|h|) \gamma(h) \\ &= \sum_{h = -(n-1)}^{n-1} \left( 1-\frac{|h|}{n}\right) \gamma(h) \\ &= \sum_{h \in \mathbb{Z}} f_n(h) \end{align*} where $f_n(h) := \mathbb{I}(|h| < n)\left( 1-\frac{|h|}{n}\right) \gamma(h)$. Notice that $|f_n| \le |\gamma|$ pointwise for any $n$, and $|\gamma|$ is "integrable" because integrability is the same as absolute summability in this case (i.e. $\sum_{h=-\infty}^{\infty}|\gamma(h)| < \infty$). Taking the limit as $n \to \infty$ on everything and applying the DCT gives us $$ \lim_n \sum_{h \in \mathbb{Z}} f_n(h) = \sum_{h \in \mathbb{Z}} \gamma(h) $$ because $f_n \to \gamma$ pointwise as $n \to \infty$.
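The convergence can be illustrated numerically with a hypothetical absolutely summable autocovariance, $\gamma(h) = 0.5^{|h|}$ (AR(1)-like), for which $\sum_h \gamma(h) = 3$:

```python
# The truncated, triangularly weighted sums f_n converge to the full
# sum over h of gamma(h), as the DCT argument above guarantees.
def gamma(h):
    return 0.5 ** abs(h)   # hypothetical absolutely summable autocovariance

def n_var_xbar(n):
    """n * Var(Xbar_n) = sum over |h| < n of (1 - |h|/n) * gamma(h)."""
    return sum((1 - abs(h) / n) * gamma(h) for h in range(-(n - 1), n))

limit = sum(gamma(h) for h in range(-200, 201))   # ~ 1 + 2*1 = 3
approx = [n_var_xbar(n) for n in (10, 100, 1000)]

# The approximations approach the limit as n grows:
assert abs(approx[-1] - limit) < abs(approx[0] - limit)
assert abs(approx[-1] - limit) < 0.01
```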
26,339
Where is the dominated convergence theorem being used?
One way to think about the dominated convergence theorem is that it is one of those theorems that give sufficient conditions for when you can move a limit inside an infinite sum or integral. In this answer, I will give a slight variation on the explanation in the other answer, to frame things in this way. This is really the same thing that the other answer is showing, but it is re-expressed here in a way that may be more familiar to readers who think of the dominated convergence theorem in the way I have described it here. As shown in the other answer here, you can write the relevant quantity of interest as: $$n \mathbb{V}(\bar{X}_n) = \sum_{h \in \mathbb{Z}} f_n(h) \quad \quad \quad f_n(h) \equiv \mathbb{I}(|h| < n) \bigg( 1 - \frac{|h|}{n} \bigg) \gamma(h).$$ We also have the limiting function: $$f(h) \equiv \lim_{n \rightarrow \infty} f_n(h) = \gamma(h).$$ Now, if we are allowed to move the limit inside the infinite sum then we would have: $$\begin{align} \lim_{n \rightarrow \infty} n \mathbb{V}(\bar{X}_n) &= \lim_{n \rightarrow \infty} \sum_{h \in \mathbb{Z}} f_n(h) \\[6pt] &= \sum_{h \in \mathbb{Z}} \lim_{n \rightarrow \infty} f_n(h) \\[6pt] &= \sum_{h \in \mathbb{Z}} f(h) \\[6pt] &= \sum_{h \in \mathbb{Z}} \gamma(h). \\[6pt] \end{align}$$ What allows us to move the limit inside the infinite sum here (which is essentially an implicit interchange of limits) is the discrete version of the dominated convergence theorem. This says that we can move the limit inside the sum if $|f_n(h)| \leqslant |f(h)|$ for all $h \in \mathbb{Z}$ and $\sum_{h \in \mathbb{Z}} |f(h)| < \infty$. The first of these conditions clearly holds in this case. The proof you are looking at is saying that if $\sum_{h \in \mathbb{Z}} |\gamma(h)| < \infty$ then the latter condition also holds, and so we can then apply the dominated convergence theorem to get the required result.
26,340
Equation of a fitted smooth spline and its analytical derivative [duplicate]
This fits a natural spline (linear tail restricted) using the truncated power basis. In this example, default knots (based on quantiles of the predictor) are not used; instead we specify 4 knots. The only way to get a test of goodness of fit is to postulate a richer model than this and see if it improves the model fitted below. But anova() tests the goodness of fit of a linear relationship by pooling the nonlinear terms into a composite ("chunk") test (F=175.38).

require(rms)
x <- 1:11
y <- c(0.2, 0.40, 0.6, 0.75, 0.88, 0.99, 1.1, 1.15, 1.16, 1.16, 1.16)
dd <- datadist(x); options(datadist='dd')
f <- ols(y ~ rcs(x, c(3, 5, 7, 9)))
f

Linear Regression Model

ols(formula = y ~ rcs(x, c(3, 5, 7, 9)))

                Model Likelihood     Discrimination
                   Ratio Test            Indexes
Obs      11     LR chi2     66.08    R2       0.998
sigma 0.0201    d.f.            3    R2 adj   0.996
d.f.      7     Pr(> chi2) 0.0000    g        0.383

Residuals
      Min        1Q    Median        3Q       Max
-0.027360 -0.011739  0.001227  0.009892  0.031166

          Coef    S.E.   t     Pr(>|t|)
Intercept  0.0465 0.0224  2.08 0.0762
x          0.1741 0.0072 24.18 <0.0001
x'        -0.1004 0.0311 -3.23 0.0144
x''        0.0542 0.0913  0.59 0.5715

anova(f)

                Analysis of Variance          Response: y

 Factor     d.f. Partial SS  MS           F      P
 x          3    1.152321844 0.3841072814 946.15 <.0001
  Nonlinear 2    0.142398208 0.0711991039 175.38 <.0001
 REGRESSION 3    1.152321844 0.3841072814 946.15 <.0001
 ERROR      7    0.002841792 0.0004059703

ggplot(Predict(f)) + geom_point(aes(x=x, y=y), data=data.frame(x, y))

Function(f)   ## if you have latex installed you can also use latex(f)

function(x = 6) {
  0.046475489 + 0.17411942*x - 0.002790266*pmax(x-3,0)^3 +
    0.0015048699*pmax(x-5,0)^3 + 0.0053610582*pmax(x-7,0)^3 -
    0.0040756621*pmax(x-9,0)^3
}

Function re-expresses the restricted cubic spline in simplest form. The first derivative is:

function(x) 0.174 - 3 * 0.00279 * pmax(x - 3, 0)^2 + 3 * 0.0015 * pmax(x - 5, 0)^2 + ...
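Since the Function(f) output is just a truncated-power-basis polynomial, it can be evaluated in any language. Here is a Python translation (knots and coefficients copied verbatim from the printed output above) together with its exact analytical derivative:

```python
# Python translation of the Function(f) output above: the restricted
# cubic spline in truncated-power-basis form, plus its exact derivative.
KNOTS = (3, 5, 7, 9)
COEFS = (-0.002790266, 0.0015048699, 0.0053610582, -0.0040756621)

def spline(x):
    val = 0.046475489 + 0.17411942 * x
    for k, c in zip(KNOTS, COEFS):
        val += c * max(x - k, 0) ** 3
    return val

def spline_deriv(x):
    """Analytical first derivative: d/dx of c*(x-k)_+^3 is 3c*(x-k)_+^2."""
    val = 0.17411942
    for k, c in zip(KNOTS, COEFS):
        val += 3 * c * max(x - k, 0) ** 2
    return val

# The fit tracks the observed points closely (sigma was about 0.02)
y = [0.2, 0.40, 0.6, 0.75, 0.88, 0.99, 1.1, 1.15, 1.16, 1.16, 1.16]
assert all(abs(spline(xi) - yi) < 0.05 for xi, yi in zip(range(1, 12), y))
```

Note that the cubic coefficients sum to zero, which is exactly the linear-tail restriction of the natural spline: the derivative becomes constant beyond the last knot.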
26,341
Equation of a fitted smooth spline and its analytical derivative [duplicate]
I would suggest using interpSpline from the splines package. Building on your example, you could use:

library(splines)
x <- 1:11
y <- c(0.2, 0.40, 0.6, 0.75, 0.88, 0.99, 1.1, 1.15, 1.16, 1.16, 1.16)
spline <- interpSpline(x, y)
plot(spline)
points(x, y)

The object spline then contains the coefficients you wanted (parts 1 and 3 of your question); you can get the equations of the first derivative by differentiating the piecewise polynomial equations constructed from these coefficients. As for your second question, I am not sure what you are trying to do. If you are simply interpolating a cubic spline through all of your data points, i.e. using all of them as knots, as you did in your example, then the curve fits the points perfectly.
26,342
Modelling count data where offset variable is 0 for some observations
So the response you want to model is "Number of calls per bird" and the troublesome lines are where you didn't observe any birds? Just drop those rows. They add no information to the thing you are trying to model.
26,343
Modelling count data where offset variable is 0 for some observations
In a Poisson GLM, an offset is simply a multiplicative scaling on the Poisson rate being modelled - and a Poisson with a rate of zero is not helpful or even meaningful... That's why Spacedman is correct!
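A minimal sketch of why this is so (hypothetical numbers): with exposure $t$ and linear predictor $\eta$, the modelled rate is $\mu = t\,e^{\eta}$, i.e. $\log\mu = \eta + \log t$; for $t = 0$ the offset $\log t$ is undefined and the rate is forced to zero regardless of the covariates.

```python
import math

def poisson_rate(eta, t):
    """Poisson rate mu = t * exp(eta): the offset enters multiplicatively."""
    return t * math.exp(eta)

assert poisson_rate(0.7, 5) > 0       # 5 birds: a proper positive rate
assert poisson_rate(0.7, 0) == 0.0    # 0 birds: rate 0, no information
try:
    math.log(0)                       # the offset term log(t) itself
except ValueError:
    pass                              # log(0) is undefined -> drop the row
```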
26,344
Modelling count data where offset variable is 0 for some observations
Just try to do it (Hurdle) "by hand" (for "didactic/gymnastic" purposes): split into a binomial part and a count part and enjoy fitting the logit and count regressions separately! Or use standard Hurdle models (+ Vuong test): Poisson/negBin/Gamma..., GAM. You don't need the "offset" var here, seems to me. ;-)
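A rough sketch of the "by hand" split in R (data simulated; note the count part below uses a plain Poisson on the positive counts as a crude stand-in for a proper zero-truncated count model, which pscl::hurdle handles for you):

```r
set.seed(2)
d <- data.frame(x = rnorm(200))
# zeros from a logit process, positive counts from a Poisson process
d$y <- rbinom(200, 1, plogis(d$x)) * (1 + rpois(200, exp(0.5 * d$x)))

# part 1: zero vs. non-zero (logit)
zero_part  <- glm(I(y > 0) ~ x, family = binomial, data = d)
# part 2: size of the count, given y > 0
count_part <- glm(y ~ x, family = poisson, data = subset(d, y > 0))
```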
26,345
GLM for proportional data and underdispersion
My answer from http://article.gmane.org/gmane.comp.lang.r.general/316863 : short answer: quasi-likelihood estimation (i.e. family=quasibinomial) should address underdispersion as well as overdispersion reasonably well. If you just want to assume that $\textrm{variance} = \phi \cdot N p(1-p)$ with $\phi < 1$, quasi-likelihood estimation will work fine. Depending on the source of your underdispersion, how much you're concerned about modeling the details, and other aspects of your data, you might want to look into ordinal or COM-Poisson models (both of these approaches have R packages devoted to them). There is generally less concern about underdispersion than overdispersion; I speculate that two of the reasons are: (1) overdispersion is probably the more common problem; (2) underdispersion leads to conservatism in statistical inference (e.g. decreased power, lowered type I error rates), in contrast to overdispersion which leads to optimism (inflated type I error rate etc.), so reviewers etc. tend not to worry about it as much.
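In R this is only a change of the family argument; with proportion responses you supply the number of trials as weights. A toy sketch (simulated data, hypothetical names; $\phi$ is estimated from the data, so underdispersion is not guaranteed in this example):

```r
set.seed(3)
n <- trunc(runif(40, 20, 60))            # trials per observation
x <- rnorm(40)
y <- rbinom(40, n, plogis(0.8 * x)) / n  # observed proportions

fit <- glm(y ~ x, family = quasibinomial, weights = n)
summary(fit)$dispersion                  # estimated phi; phi < 1 indicates underdispersion
```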
26,346
Fisher information matrix determinant for an overparameterized model
For the normal $X\sim N(\mu,\sigma^2)$, the information matrix is $$\mathcal{I}_1 = \left( \begin{matrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{matrix} \right) $$ For the curved normal $X\sim N(\mu,\mu^2)$, $$\mathcal{I}_2=\frac{3}{\mu^2}.$$ So, your observation that the determinants are equal is not universal, but that is not the whole story. Generally, if $\mathcal{I}_g$ is the information matrix under the reparametrization $$g(\theta)=(g_1(\theta),...,g_k(\theta))',$$ then it is not difficult to see that the information matrix for the original parameters is $$I(\theta)=G'I_g(g(\theta))G$$ where $G$ is the Jacobian of the transformation $g=g(\theta)$. For the Bernoulli example, $(\theta_0,\theta_1)=(p,1-p)$ and $g(p)=(p,1-p)$. So, the Jacobian is $(1,-1)'$ and thus $$\mathcal{I}(p) = \left( \begin{matrix} 1& -1 \end{matrix} \right)\left( \begin{matrix} \frac{1}{p} & 0 \\ 0 & \frac{1}{1-p} \end{matrix} \right) \left( \begin{matrix} 1 \\ -1 \end{matrix} \right)=\frac{1}{p(1-p)}$$ For the curved normal example, $$\mathcal{I}_2 = \left( \begin{matrix} 1& 2\mu \end{matrix} \right)\left( \begin{matrix} \frac{1}{\mu^2} & 0 \\ 0 & \frac{1}{2\mu^4} \end{matrix} \right) \left( \begin{matrix} 1 \\ 2\mu \end{matrix} \right)=\frac{3}{\mu^2}.$$ I think now you can easily relate the determinants. Follow-up after the comment: If I understood you correctly, the FIM is valid as long as you extend the parameters in a meaningful way: the likelihood under the new parametrization should be a valid density. Hence, I called the Bernoulli example an unfortunate one. I think the link you provided has a serious flaw in the derivation of the FIM for categorical variables, as we have $E(x_i^2)=\theta_i(1-\theta_i)\neq \theta_i$ and $E(x_ix_j)=\theta_i\theta_j\neq 0$. The expectation of the negative Hessian gives $\mathrm{diag}\{1/\theta_i\}$, but this does not hold for the covariance of the score vectors. If you neglect the constraints, the information matrix equality doesn't hold.
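A quick numerical check of the sandwich formula $I(\theta)=G'I_g(g(\theta))G$ for the Bernoulli example:

```r
p  <- 0.3
G  <- matrix(c(1, -1), ncol = 1)   # Jacobian of g(p) = (p, 1 - p)'
Ig <- diag(c(1 / p, 1 / (1 - p)))  # FIM in the extended parametrization
I1 <- drop(t(G) %*% Ig %*% G)
all.equal(I1, 1 / (p * (1 - p)))   # TRUE
```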
26,347
Fisher information matrix determinant for an overparameterized model
It appears that the result holds for a specific kind of relation between the parameters. Without claiming full generality for the results below, I stick to the "one to two parameters" case. Denote $g(\theta_0,\theta_1) =0$ the implicit equation that expresses the relationship that must hold between the two parameters. Then the "correct extended", "two-parameter" log-likelihood (not what the OP calculates -we will arrive there) $$L^e=L^*(\theta_0,\theta_1) +\lambda g(\theta_0,\theta_1)$$ is equivalent to the true likelihood $L$, since $g(\theta_0,\theta_1)=0$, ($\lambda$ is a multiplier) and we can treat the two parameters as independent, while we differentiate. Using subscripts to denote derivatives with respect to parameters (one subscript first derivative, two subscripts second derivative), the determinant of the Hessian of the correct extended log-likelihood will be $$D_H(L^e) = [L^*_{00}+\lambda g_{00}][L^*_{11}+\lambda g_{11}] - [L^*_{01}+\lambda g_{01}]^2 = D_H(L) \tag{1}$$ What is the OP doing instead? He considers the wrong likelihood $L^*(\theta_0,\theta_1)$ "ignoring" the relation between the two parameters, and without taking into account the constraint $g(\theta_0,\theta_1)$. He then proceeds with differentiation and obtains $$D_H(L^*) = L^*_{00}L^*_{11} - [L^*_{01}]^2 \tag{2}$$ It is evident that $(2)$ is not in general equal to $(1)$. But if $g_{00}=g_{11}=g_{01}=0$, then $$(1) \rightarrow D_H(L^e) = L^*_{00}L^*_{11} - [L^*_{01}]^2 = D_H(L^*) = D_H(L)$$ So if the relation between the actual parameter and the redundant parameter is such that the second partial derivatives of the implicit function that links them are all zero, then the approach that is fundamentally wrong, ends up "correct". 
For the Bernoulli case, we indeed have $$g(\theta_0,\theta_1) = \theta_0 + \theta_1 -1 \Rightarrow g_{00}=g_{11}=g_{01}=0$$ ADDENDUM To respond to @Khashaa's question and show the mechanics here, we consider a likelihood specified with a redundant parameter, but also under a constraint that links the redundant parameter with the true one. What we do with log-likelihoods is maximize them - so here we have a case of constrained maximization. Assume a sample of size $n$: $$\max L_n^*(\theta_0, \theta_1) = \ln \theta_0\sum_{i=1}^nx_i + \left(n-\sum_{i=1}^nx_i\right)\ln\theta_1,\;\; s.t. \;\; \theta_1 = 1-\theta_0$$ This problem has a Lagrangean (what informally I called the "correct extended likelihood" above), $$L^e = \ln \theta_0\sum_{i=1}^nx_i + \left(n-\sum_{i=1}^nx_i\right)\ln\theta_1 + \lambda(\theta_1 - 1+\theta_0)$$ The first-order conditions for a maximum are $$ \frac {\sum_{i=1}^nx_i}{\theta_0} + \lambda = 0,\;\;\; \frac {n-\sum_{i=1}^nx_i}{\theta_1} +\lambda =0$$ from which we obtain the relation $$\frac {\sum_{i=1}^nx_i}{\theta_0} = \frac {n-\sum_{i=1}^nx_i}{\theta_1} \Rightarrow \theta_1\sum_{i=1}^nx_i = \left(n-\sum_{i=1}^nx_i\right)\theta_0$$ Using the constraint under which the above are valid, $\theta_1 = 1-\theta_0$, we obtain $$ (1-\theta_0)\sum_{i=1}^nx_i = \left(n-\sum_{i=1}^nx_i\right)\theta_0 $$ $$\Rightarrow \sum_{i=1}^nx_i = n\theta_0 \Rightarrow \hat \theta_0 = \frac 1n\sum_{i=1}^nx_i$$ as we should. Moreover, since the constraint is linear in all the parameters, its second derivatives will be zero. This is reflected in the fact that in the first derivatives of the Lagrangean, the multiplier $\lambda$ "stands alone" and it will be eliminated when we take second derivatives of the Lagrangean. This in turn will lead us to a Hessian whose determinant will equal the (one-dimensional) second derivative of the original one-parameter log-likelihood, after imposing also the constraint (which is what the OP does). 
Then taking the negative of the expected value in both cases, does not change this mathematical equivalence, and we arrive at the relation "one-dimensional Fisher Information = determinant of two-dimensional Fisher Information". Now given that the constraint is linear in all the parameters, the OP obtains the same result (at second-derivative level) without introducing the constraint with a multiplier in the function to be maximized, because at second derivative level, the presence/effect of the constraint disappears in such a case. All these have to do with calculus, not with statistical concepts.
26,348
James-Stein Estimator with unequal variances
This question was explicitly answered in the classical series of papers on James-Stein estimator in the Empirical Bayes context written in the 1970s by Efron & Morris. I am mainly referring to: Efron and Morris, 1973, Stein's Estimation Rule and Its Competitors -- An Empirical Bayes Approach Efron and Morris, 1975, Data Analysis with Stein's Estimator and Its Generalizations Efron and Morris, 1977, Stein's Paradox in Statistics The 1977 paper is a non-technical exposition that is a must read. There they introduce the baseball batting example (that is discussed in the thread you linked to); in this example the observation variances are indeed supposed to be equal for all variables, and the shrinkage factor $c$ is constant. However, they proceed to give another example, which is estimating the rates of toxoplasmosis in a number of cities in El Salvador. In each city different number of people were surveyed, and so individual observations (toxoplasmosis rate in each city) can be thought of having different variances (the lower the number of people surveyed, the higher the variance). The intuition is certainly that data points with low variance (low uncertainty) don't need to be shrunken as strongly as data points with high variance (high uncertainty). The result of their analysis is shown on the following figure, where this can indeed be seen to be happening: The same data and analysis are presented in the much more technical 1975 paper as well, in a much more elegant figure (unfortunately not showing the individual variances though), see Section 3: There they present a simplified Empirical Bayes treatment that goes as follows. Let $$X_i|\theta_i \sim \mathcal N(\theta_i, D_i)\\ \theta_i \sim \mathcal N(0, A)$$ where $A$ is unknown. 
In case all $D_i=1$ are identical, the standard Empirical Bayes treatment is to estimate $1/(1+A)$ as $(k-2)/\sum X_j ^2$, and to compute the a posteriori mean of $\theta_i$ as $$\hat \theta_i = \left(1-\frac{1}{1+A}\right)X_i = \left(1-\frac{k-2}{\sum X_j^2}\right)X_i,$$ which is nothing else than the James-Stein estimator. If now $D_i \ne 1$, then the Bayes update rule is $$\hat \theta_i = \left(1-\frac{D_i}{D_i+A}\right)X_i$$ and we can use the same Empirical Bayes trick to estimate $A$, even though there is no closed formula for $\hat A$ in this case (see paper). However, they note that ... this rule does not reduce to Stein's when all $D_j$ are equal, and we instead use a minor variant of this estimator derived in [the 1973 paper] which does reduce to Stein's. The variant rule estimates a different value $\hat A_i$ for each city. The difference between the rules is minor in this case, but it might be important if $k$ were smaller. The relevant section in the 1973 paper is Section 8, and it is a bit of a tougher read. Interestingly, they have an explicit comment there on the suggestion made by @guy in the comments above: A very simple way to generalize the James-Stein rule for this situation is to define $\tilde x_i = D_i^{-1/2} x_i, \tilde \theta_i = D_i^{-1/2} \theta_i$, so that $\tilde x_i \sim \mathcal N(\tilde \theta_i, 1)$, apply [the original James-Stein rule] to the transformed data, and then transform back to the original coordinates. The resulting rule estimates $\theta_i$ by $$\hat \theta_i = \left(1-\frac{k-2}{\sum [X_j^2 / D_j]}\right)X_i.$$ This is unappealing since each $X_i$ is shrunk toward the origin by the same factor. Then they go on and describe their preferred procedure for estimating $\hat A_i$ which I must confess I have not fully read (it is a bit involved). I suggest you look there if you are interested in the details.
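The common-shrinkage rule quoted at the end (the variant Efron & Morris call "unappealing", not their preferred per-component rule) can be coded directly; the data here are simulated:

```r
# James-Stein with a single shrinkage factor under known unequal variances D_i
js_common <- function(X, D) {
  k <- length(X)
  (1 - (k - 2) / sum(X^2 / D)) * X   # every X_i shrunk toward 0 by the same factor
}

set.seed(4)
theta <- rnorm(20)              # true means (prior variance A = 1)
D     <- runif(20, 0.5, 2)      # known observation variances
X     <- rnorm(20, theta, sqrt(D))
est   <- js_common(X, D)
```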
26,349
James-Stein Estimator with unequal variances
Page 12 of Demaret's MSc thesis discusses the JS estimator for unequal variances. But some additional deciphering of the notation is required. For example: $Y$, $\eta$ & $s^*$ are defined as rescaling variables but are not mentioned later; $s$ ($n \times$ the sample variance?) is said to follow ("$\sim$") the $\sigma^2 \chi_n^2$ distribution, but how do you then explicitly code this distribution?
26,350
Why does bagging use bootstrap samples?
Interesting question. The bootstrap has good sampling properties, compared to some alternatives like the jackknife. The main downside of bootstrapping is that every iteration has to work with a sample that's as big as the original data set (which can be computationally expensive), while some other sampling techniques can work with much smaller samples. This paper suggests that naïvely cutting the sample size can reduce performance, relative to bootstrap-based bagging, which would be a reason not to do so. The paper also introduces a novel method for using smaller samples in bagging estimates, while avoiding those problems.
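A minimal sketch of bagging via the bootstrap: each base learner is fit to a resample of the same size n, drawn with replacement, and predictions are averaged (data and base learner here are toy choices):

```r
set.seed(5)
n <- 100
d <- data.frame(x = runif(n))
d$y <- sin(2 * pi * d$x) + rnorm(n, sd = 0.3)

B <- 50
preds <- replicate(B, {
  idx <- sample(n, n, replace = TRUE)          # bootstrap sample, same size n
  fit <- lm(y ~ poly(x, 5), data = d[idx, ])   # base learner on the resample
  predict(fit, newdata = d)
})
bagged <- rowMeans(preds)                      # aggregate over the ensemble
```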
26,351
Estimating the slope of the straight portion of a sigmoid curve
Here is a quick and dirty idea based on @alex's suggestion. #simulated data set.seed(100) x <- sort(exp(rnorm(1000, sd=0.6))) y <- ecdf(x)(x) It looks a little bit like your data. The idea is now to look at the derivative and try to see where it is biggest. This should be the part of your curve where it is straightest, because of it being an S-shape. NQ <- diff(y)/diff(x) plot.ts(NQ) It is wiggly because some of the $x$ values happen to be very close together. However, taking logs helps, and then you can use a smoothed version. log.NQ <- log(NQ) low <- lowess(log.NQ) cutoff <- 0.75 q <- quantile(low$y, cutoff) plot.ts(log.NQ) abline(h=q) Now you could try to find the $x$'s like this: x.lower <- x[min(which(low$y > q))] x.upper <- x[max(which(low$y > q))] plot(x,y) abline(v=c(x.lower, x.upper)) Of course, the whole thing is ultimately sensitive to the choice of cutoff and also the choice of smoothing algorithm and also happening to take logs, when we could have done some other transformation. Also, for real data, random variation in $y$ might cause problems with this method as well. Derivatives are not numerically well-behaved. Edit: added picture of output.
26,352
Decision trees variable (feature) scaling and variable (feature) normalization (tuning) required in which implementations?
For 1, decision trees in general don't usually require scaling. However, it helps with data visualization/manipulation, and might be useful if you intend to compare performance with other data or other methods like SVM. For 2, this is a question of tuning. Units/hour might be considered a type of variable interaction and may have predictive power different from each variable alone. This really depends on your data, though. I'd try with and without to see if there is a difference.
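The reason scaling is usually unnecessary for trees is that splits depend only on the ordering of a feature's values. A toy Python sketch (best_split is a hypothetical helper written for this illustration, not part of any tree library) shows an exhaustive Gini split search returning the same partition after rescaling or log-transforming the feature:

```python
import math

# Toy check that an exhaustive split search is unchanged by monotone rescaling.
# best_split returns the label partition induced by the Gini-optimal threshold
# on a single feature.
def best_split(xs, ys):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    labels = [ys[i] for i in order]

    def gini(part):
        if not part:
            return 0.0
        p = sum(part) / len(part)
        return 2 * p * (1 - p)

    best_score, best_part = float("inf"), None
    for k in range(1, len(labels)):
        left, right = labels[:k], labels[k:]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_score, best_part = score, (tuple(left), tuple(right))
    return best_part

x = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
y = [0, 0, 0, 1, 1, 1]
same = best_split(x, y) == best_split([v * 100 for v in x], y)
same_log = best_split(x, y) == best_split([math.log(v) for v in x], y)
```

Any strictly increasing transform leaves the sort order (and hence the chosen partition) untouched, which is why tree performance is scale-invariant even when scaling helps elsewhere in the pipeline.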
26,353
Can a variable be included in a mixed model as a fixed effect and as a random effect at the same time?
First, make sure Group is defined as a factor. Your current model is $$M_{ij}=\alpha_i+\beta_1\mathrm{Time}_{ij}+u_i+e_{ij},$$ where $i$ denotes Group and $j$ the time points. If you run your model and check with ranef(), you will find that it is hard to distinguish $\alpha_i$ and $u_i$ in the estimation, so the $u_i$ are almost all equal to 0. Two possible alternative models are:

Random effects model, $$M_{ij}=\beta_0+\beta_1\mathrm{Time}_{ij}+u_i+e_{ij},$$ where $\beta_0$ is the average intercept and $u_i$ is the random individual deviation from the average intercept for each group, assumed to follow a distribution.

Fixed effects model (in the context of econometrics), $$M_{ij}=\alpha_i+\beta_1\mathrm{Time}_{ij}+e_{ij},$$ where $\alpha_i$ is the fixed individual intercept.

Updates: I use the sleepstudy data set as an illustration. If the grouping variable (a factor) is included as a covariate (Model fm2 below), both the random effects and their variance tend to zero. The intuitive explanation is that $\alpha_i$ and $u_i$ basically model the same quantity (the group-specific intercept), although one is assumed fixed and one random. The majority of the variability is first absorbed by the fixed intercepts ($\alpha_i$), so the random intercepts $u_i$ tend to be all zero. The code and results are listed below.

> fm1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = F)
> re1 = as.matrix(ranef(fm1)$Subject)
> summary(as.vector(re1))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
-77.570  -7.460   5.701   0.000  15.850  71.920
> VarCorr(fm1)
 Groups   Name        Std.Dev.
 Subject  (Intercept) 36.012
 Residual             30.895
> sd(re1)
[1] 35.76336

> fm2 <- lmer(Reaction ~ Days + Subject + (1 | Subject), sleepstudy, REML = F)
> re2 = as.matrix(ranef(fm2)$Subject)
> summary(as.vector(re2))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      0       0       0       0       0       0
> VarCorr(fm2)
 Groups   Name        Std.Dev.
 Subject  (Intercept)  0.00
 Residual             29.31
> sd(re2)
[1] 0

Updates 2: Previously I used ML in lmer because I found the REML variance estimate for the random intercept seems far from the true value in this extreme case. I know that it may not make sense to include both fixed and random group-specific intercepts in a single model, but it can be an interesting example. Note that the only difference between this model and the random intercept model is a larger design matrix for the fixed effects. The lmer example below uses REML, and the model is the same as Model fm2 above. The estimated random effects are all close to zero, and their standard deviation is very close to zero. I know the standard deviation of the estimated random effects is not a perfect estimate of the variance of the random effects, but the two should correspond somehow. Yet the REML variance estimate for the random effects is 33, which is far from zero.

> fm3 <- lmer(Reaction ~ Days + Subject + (1 | Subject), sleepstudy, REML = T, verbose = T)
> re3 = as.matrix(ranef(fm3)$Subject)
> summary(as.vector(re3))
      Min.    1st Qu.     Median       Mean    3rd Qu.       Max.
-3.998e-12 -9.220e-13 -6.549e-13 -7.717e-13 -4.567e-13  6.893e-13
> VarCorr(fm3)
 Groups   Name        Std.Dev.
 Subject  (Intercept) 33.050
 Residual             30.991
> sd(re3)
[1] 9.391908e-13

I also tested in Stata, and it becomes more interesting. The mixed command uses an EM algorithm but it cannot converge and thus gives a very large estimate. In my understanding, REML and ML should not differ so much in this case. There may be some numerical issues. Given that the estimates rely on iterations, I will think more about this when I have more time.

. mixed reaction days i.subject || subject:, reml

Performing EM optimization:
Performing gradient-based optimization:
could not calculate numerical derivatives -- discontinuous region with missing values encountered
could not calculate numerical derivatives -- discontinuous region with missing values encountered
Computing standard errors:
standard-error calculation failed

Mixed-effects REML regression               Number of obs      =       180
Group variable: subject                     Number of groups   =        18
                                            Obs per group: min =        10
                                                           avg =      10.0
                                                           max =        10
                                            Wald chi2(18)      =    169.64
Log restricted-likelihood = -805.65036      Prob > chi2        =    0.0000
...
------------------------------------------------------------------------------
  Random-effects Parameters  |   Estimate   Std. Err.     [95% Conf. Interval]
-----------------------------+------------------------------------------------
subject: Identity            |
                  var(_cons) |   105231.5          .             .           .
-----------------------------+------------------------------------------------
               var(Residual) |   960.4566          .             .           .
------------------------------------------------------------------------------
LR test vs. linear model: chibar2(01) = 2.3e-13   Prob >= chibar2 = 1.0000
Warning: convergence not achieved; estimates are based on iterated EM
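The "fixed intercepts absorb the group variability" point can be illustrated numerically outside of lmer. In this toy Python sketch (illustrative only; it fits nothing but group means), once each group's mean is removed there is no between-group variation left for a random intercept to pick up:

```python
import random
import statistics

random.seed(1)

# Toy illustration (not a mixed-model fit): once each group has its own fixed
# intercept -- here simply the group mean -- the leftover group-level
# deviations are exactly zero, so a random intercept has nothing to explain.
groups = {g: [random.gauss(10 * g, 1.0) for _ in range(50)] for g in range(5)}
group_means = {g: statistics.fmean(v) for g, v in groups.items()}

# Mean residual per group after removing the fixed intercepts.
residual_group_means = [
    statistics.fmean([x - group_means[g] for x in v]) for g, v in groups.items()
]
between_group_sd = statistics.pstdev(residual_group_means)
```

This is the same identifiability issue in miniature: the residual group means are all (numerically) zero, which is why the estimated $u_i$ collapse to zero once the $\alpha_i$ are in the model.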
26,354
Motivation behind random forest algorithm steps
Ensemble methods (such as random forests) require some element of variation in the datasets that the individual base classifiers are grown on (otherwise random forests would end up with a forest of trees that are too similar). As decision trees are highly sensitive to the observations in the training set, varying the observations (using the bootstrap) was, I suppose, a natural approach to getting the required diversity. The obvious alternative is to vary the features that are used, e.g. train each tree on a subset of the original features. Using the bootstrap samples also allows us to estimate the out-of-bag (OOB) error rate and variable importance.

2 is essentially another way of injecting randomness into the forest. It also helps reduce the correlation among the trees (by using a low mtry value), with the trade-off being (potentially) worsened predictive power. Using too large a value of mtry will cause the trees to become increasingly similar to one another (and in the extreme you end up with bagging).

I believe that the reason for not pruning is more that it's not necessary than anything else. With a single decision tree you would normally prune it, since it's highly susceptible to overfitting. However, by using the bootstrap samples and growing many trees, random forests can grow trees that are individually strong but not particularly correlated with one another. Basically, the individual trees are overfit, but provided their errors are not correlated, the forest should be reasonably accurate.

The reason this works well is similar to Condorcet's jury theorem (and the logic behind methods such as boosting). Basically you have lots of weak learners that only need to perform marginally better than random guessing. If this is true, you can keep adding weak learners, and in the limit you would get perfect predictions from your ensemble. Clearly this is restricted by the errors of the learners becoming correlated, which prevents the ensemble's performance from improving further.
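The jury-theorem intuition is easy to check exactly. Assuming m independent voters each correct with probability p (independence is the strong assumption here, and exactly what correlated tree errors break), majority-vote accuracy follows from the binomial distribution; a quick Python sketch:

```python
import math

# Exact majority-vote accuracy for m independent voters, each correct with
# probability p (m odd avoids ties).
def majority_accuracy(m, p):
    return sum(math.comb(m, k) * p**k * (1 - p) ** (m - k)
               for k in range(m // 2 + 1, m + 1))

p = 0.6
acc_1 = majority_accuracy(1, p)      # a single weak learner
acc_11 = majority_accuracy(11, p)    # a small ensemble
acc_101 = majority_accuracy(101, p)  # a large ensemble
```

With p = 0.6, accuracy climbs from 0.6 for a single learner toward 1 as the ensemble grows, exactly the limit behaviour described above, and only so long as the errors stay independent.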
26,355
Is CCA between two identical datasets equivalent to PCA on this dataset?
Let $\bf X$ be $n\times p_1$ and $\bf Y$ be $n \times p_2$ data matrices, representing two datasets with $n$ samples (i.e. observations of your random row vectors $X$ and $Y$) in each of them. CCA looks for a linear combination of $p_1$ variables in $\bf X$ and a linear combination of $p_2$ variables in $\bf Y$ such that they are maximally correlated between each other; then it looks for the next pair, under a constraint of zero correlation with the first pair; etc. In case $\mathbf{X} = \mathbf{Y}$ (and $p_1=p_2 = p$), any linear combination in one dataset will trivially have correlation $1$ with the same linear combination in another dataset. So all CCA pairs will have correlations $1$, and the order of pairs is arbitrary. The only remaining constraint is that linear combinations should be uncorrelated between each other. There is an infinite number of ways to choose $p$ uncorrelated linear combinations (note that the weights do not have to be orthogonal in the $p$-dimensional space) and any of them will produce a valid CCA solution. One such way is indeed given by PCA, as any two PCs have correlation zero. So PCA solution will indeed be a valid CCA solution, but there is an infinite number of equivalently good CCA solutions in this case. Mathematically, CCA looks for right ($\mathbf a$) and left ($\mathbf b$) singular vectors of $\mathbf C_{XX}^{-1/2} \mathbf C_{XY} \mathbf C_{YY}^{-1/2}$, which in this case is equal to $\mathbf I$, with any vector being an eigenvector. So $\mathbf a=\mathbf b$ can be arbitrary. CCA then obtains the linear combination weights as $\mathbf C_{XX}^{-1/2} \mathbf a$ and $\mathbf C_{YY}^{-1/2} \mathbf b$. In this case it boils down to taking an arbitrary basis and transforming it with $\mathbf C_{XX}^{-1/2}$, which will indeed produce uncorrelated directions.
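A small numerical check of the claim, written in Python for a 2-D case so the PCA eigenvectors have a closed form (the corr helper is hand-rolled for this sketch): the two PC score vectors of $\mathbf X$ are uncorrelated, and with $\mathbf Y = \mathbf X$ each PC trivially has correlation 1 with its own copy, so the PCA solution satisfies the CCA constraints.

```python
import math
import random

random.seed(0)

# Numerical check in 2-D (so the covariance eigenvectors have a closed form):
# the PCA scores of X are mutually uncorrelated, so with Y = X they form a
# valid CCA solution whose canonical correlations are all 1.
n = 500
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.8 * a + 0.6 * random.gauss(0, 1) for a in x1]

def centered(v):
    m = sum(v) / len(v)
    return [a - m for a in v]

x1, x2 = centered(x1), centered(x2)
a = sum(v * v for v in x1) / n                      # covariance matrix entries
b = sum(u * v for u, v in zip(x1, x2)) / n
c = sum(v * v for v in x2) / n

# Leading eigenvalue/eigenvector of [[a, b], [b, c]] in closed form.
lam1 = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
v1 = (b, lam1 - a)
v2 = (-v1[1], v1[0])                  # orthogonal complement = 2nd eigenvector

pc1 = [v1[0] * u + v1[1] * w for u, w in zip(x1, x2)]
pc2 = [v2[0] * u + v2[1] * w for u, w in zip(x1, x2)]

def corr(u, v):
    num = sum(p * q for p, q in zip(u, v))          # u, v are centered
    return num / math.sqrt(sum(p * p for p in u) * sum(q * q for q in v))

corr_pc = corr(pc1, pc2)          # PC scores are uncorrelated
corr_self = corr(pc1, pc1)        # "canonical correlation" when Y = X
```

Of course, any other uncorrelated basis would pass the same check, which is exactly the non-uniqueness described above.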
26,356
How to interpret this PCA biplot coming from a survey of what areas people are interested in?
The dots are the respondents and the colours are the genders. This, you know. The principal axes of your plot represent the first and second PC scores and individuals are plotted on that basis. Somebody in the lower left hand quadrant got low scores on both. PC2 seems to flag "male" and "female" interests. I don't know what PC1 means, but it probably represents an overall interest score -- people with lots of interests score high. Or perhaps it represents people with passionate interests (score 5). The vectors are a projected coordinate system for the original variables. So if you project a point perpendicularly onto, say, the reading vector - you should get the reading score of that person. Relative position is important here. Take a "male" vector like "adrenaline sports". Now imagine that you project a pink spot onto it from high in the upper right quadrant. That person's co-ordinate on "adrenaline sports" will be negative. So why are the arrows all in the right half of the graph? Given the geometry, the deeper a person is into the left side of the graph, the fewer of their projections will be positive. This suggests that PC1 is a measure of overall interest level. I'm not sure what else you could learn here. You might want to look at PC3 and PC4, if PC1 and PC2 only tell you that some people have more interests than others and that men are different from women. Your plot seems almost symmetric around the PC1 axis, and symmetric with respect to gender. As many men have female interests as women have male interests ... or is that true? I'm just looking at the dots. It might be interesting to look at areas where the map is not symmetric: large PC1, moderately negative PC2 --- that sector has a lot of action. Why?
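The projection rule described above is just a dot product with the arrow's unit vector, which is why a point high in the upper-right quadrant can still score negatively on a steeply downward-pointing arrow. A tiny sketch, with made-up arrow coordinates for illustration:

```python
import math

# A respondent's coordinate on a variable arrow is the dot product with the
# arrow's unit vector. The arrow below is invented: it points into the lower
# half-plane, like the "male interest" arrows described in the answer.
def projection(point, arrow):
    return (point[0] * arrow[0] + point[1] * arrow[1]) / math.hypot(*arrow)

arrow = (0.3, -1.0)
high_upper_right = projection((1.0, 3.0), arrow)   # negative coordinate
lower_right = projection((1.0, -1.0), arrow)       # positive coordinate
```

So relative position with respect to the arrow's direction, not quadrant membership alone, determines the sign of a respondent's score on that variable.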
26,357
What's the difference between Maximizing Conditional (Log) Likelihood or Joint (Log) Likelihood while estimating parameters of a model?
It depends what you want to do with your model later. Joint models attempt to predict the whole distribution over $X$ and $y$. This has some useful properties:

- Outlier detection. Samples very unlike your training samples can be identified, as they'll have a low marginal probability. A conditional model won't necessarily be able to tell you this.
- Sometimes it's easier to optimise. If your model were a Gaussian mixture model, say, there are well-documented ways to fit it to the joint density that you can just plug in (expectation maximisation, variational Bayes), but things get more complicated if you want to train it conditionally.
- Depending on the model, training can potentially be parallelised by taking advantage of conditional independences, and you may also avoid the need to retrain it later if new data becomes available. E.g. if every marginal distribution $f(X|y)$ is parameterised separately, and you observe a new sample $(X=x_1,y=y_1)$, then the only marginal distribution you need to retrain is $f(X|y=y_1)$. The other marginal distributions $f(X|y=y_2), f(X|y=y_3), \ldots$ are unaffected. This property is less common with conditional models.

I recall reading a paper which indicated joint models have some other nice properties in cases where there's lots and lots of data, but I cannot remember the exact claim, or find it in my big folder of interesting papers. If I find it later I'll put in a reference.

Conditional models, however, have some interesting properties too:

- They can work really well. Some have had a lot of work put into finding sensible optimisation strategies (e.g. support vector machines).
- The conditional distribution is very often "simpler" to model than the joint: to model the latter, you have to model the former as well as modelling the marginal distribution.

If you're only interested in getting accurate predictions of what value $y$ takes for a given $X$, it can be more sensible to concentrate your model's capacity on representing this alone.
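The outlier-detection point can be sketched in a couple of lines. Under a fitted generative model (here just a univariate Gaussian, purely for illustration), an input far from the training data receives a vanishingly small density, which a purely conditional model would not report:

```python
import math
import random
import statistics

random.seed(7)

# A generative (joint) model assigns every input a density, so inputs far from
# the training data can be flagged as unlikely. Univariate Gaussian for brevity.
train = [random.gauss(0.0, 1.0) for _ in range(1000)]
mu, sd = statistics.fmean(train), statistics.stdev(train)

def density(x):
    z = (x - mu) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

typical = density(0.5)    # near the training data
outlier = density(8.0)    # far from anything seen in training
```

A conditional model $f(y|X)$ would happily emit a prediction for x = 8.0 with no warning; the joint model's tiny marginal density is the warning.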
26,358
Newey-West t-statistics
Seeing as how I had a similar question earlier and came across this long-unanswered question through a simple web search, I'll take a stab and post what I think is one possible solution to your situation that others may also be encountering. According to SAS Support, you can take the time series you have and fit an intercept-only regression model to it. The estimated intercept of this regression model will be the sample mean of the series. You can then pass this intercept-only regression model through the SAS commands used to retrieve Newey-West standard errors of a regression model. Here is the link to the SAS Support page: http://support.sas.com/kb/40/098.html Look for "Example 2. Newey-West standard error correction for the sample mean of a series". In your case, simply try the same approach in Matlab. If someone has a better approach, please enlighten us.
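The reason the trick works is that the OLS estimate of an intercept-only regression is exactly the sample mean: with a design matrix $X$ consisting of a column of ones, $\hat\beta = (X'X)^{-1}X'y = \sum y_i / n$. A two-line check (in Python, just to spell out the algebra):

```python
# Intercept-only OLS: with X a column of ones, the normal equations give
# beta = (X'X)^{-1} X'y = sum(y) / n, i.e. the sample mean.
y = [2.0, 4.0, 9.0, 1.0, 4.0]
xtx = sum(1.0 * 1.0 for _ in y)   # X'X
xty = sum(1.0 * v for v in y)     # X'y
beta_hat = xty / xtx
mean_y = sum(y) / len(y)
```

So any routine that produces Newey-West standard errors for regression coefficients automatically produces one for the sample mean when fed this degenerate regression.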
26,359
Newey-West t-statistics
This function returns the t-statistic for the mean under the null hypothesis that the mean equals h0. lag must be set to the number of lagged returns that can be considered autocorrelated.

function y = NWtest(ret, lag, h0)
    T  = size(ret, 1);
    vv = var(ret);                             % sample variance of the returns
    for l = 1:1:lag
        cc = cov(ret(1:end-l), ret(l+1:end));  % autocovariance at lag l
        vv = vv + 2*(1 - l/lag)*cc(1,2);       % Bartlett-type weight (note: the l = lag term gets weight zero)
    end
    y = (mean(ret) - h0)/sqrt(vv)*sqrt(T);
end
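For anyone working in Python rather than Matlab, here is a rough port of the same function using numpy. Note that it uses the standard Newey-West Bartlett weights 1 - l/(lag+1) rather than the 1 - l/lag in the Matlab code (which gives the last lag weight zero), so treat the weighting choice as an assumption:

```python
import numpy as np

def nw_tstat(ret, lag, h0=0.0):
    """Newey-West t-statistic for the mean of ret under H0: mean == h0."""
    ret = np.asarray(ret, dtype=float)
    T = ret.size
    vv = ret.var(ddof=1)                        # sample variance, like Matlab's var()
    for l in range(1, lag + 1):
        cc = np.cov(ret[:-l], ret[l:])[0, 1]    # sample autocovariance at lag l
        vv += 2.0 * (1.0 - l / (lag + 1)) * cc  # Bartlett weight
    return (ret.mean() - h0) / np.sqrt(vv) * np.sqrt(T)
```

With lag=0 this reduces to the ordinary one-sample t-statistic.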
26,360
Getting different results when plotting 95% CI ellipses with ggplot or the ellipse package
You're not doing anything wrong; the two functions make different underlying assumptions about the distribution of the data. Your first implementation assumes a multivariate normal, and the second a multivariate t-distribution (see ?cov.trob in package MASS). The effect is easier to see if you pull out one group:

# pull out group 1
pick = group == 1
p3 <- qplot(data=df[pick,], x=x, y=y)
tl = with(df[pick,], ellipse(cor(x, y), scale=c(sd(x), sd(y)),
                             centre=c(mean(x), mean(y))))
p3 <- p3 + geom_path(data=as.data.frame(tl), aes(x=x, y=y))
p3 <- p3 + stat_ellipse(level=0.95)
p3  # looks off center
p3 <- p3 + geom_point(aes(x=mean(x), y=mean(y), size=2, color="red"))
p3

So although it is close to the same center and orientation, they are not the same. You can come close to the same size ellipse by using cov.trob() to get the correlation and scale for passing to ellipse(), and using the t argument to set the scaling equal to an F-distribution quantile as stat_ellipse() does.

tcv = cov.trob(df[pick, 2:3], cor=TRUE)
tl = with(df[pick,], ellipse(tcv$cor[2,1], scale=sqrt(diag(tcv$cov)),
                             t=qf(0.95, 2, length(x)-1), centre=tcv$center))
p3 <- p3 + geom_path(data=as.data.frame(tl), aes(x=x, y=y, color="red"))
p3

but the correspondence still isn't exact. The difference must arise between using the Cholesky decomposition of the covariance matrix and creating the scaling from the correlation and the standard deviations. I'm not enough of a mathematician to see exactly where the difference is. Which one is correct? That's up to you to decide! The stat_ellipse() implementation will be less sensitive to outlying points, while the first will be more conservative.
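Language aside, the geometry both functions share can be sketched in a few lines of numpy: take a 2×2 covariance matrix, scale its Cholesky factor by the square root of the chosen quantile, and map the unit circle through it. The variable names and the hard-coded chi-square quantile `chisq_95_df2` are assumptions for illustration; swap in an F-quantile to mimic stat_ellipse:

```python
import numpy as np

def cov_ellipse(mean, cov, c, n=200):
    """Boundary points of {x : (x-mean)' cov^{-1} (x-mean) = c}."""
    L = np.linalg.cholesky(cov)                          # cov = L @ L.T
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    circle = np.vstack([np.cos(theta), np.sin(theta)])   # unit circle
    return mean[:, None] + np.sqrt(c) * (L @ circle)

# 95% region for a bivariate normal: chi-square(2) quantile
chisq_95_df2 = 5.991464547        # qchisq(0.95, df=2)
mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.8],
                [0.8, 1.0]])
pts = cov_ellipse(mean, cov, chisq_95_df2)
```

Every point on the returned curve has Mahalanobis distance exactly c from the mean, which is what makes it a constant-density contour.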
26,361
How to standardize an array if standard deviation is zero?
The situation you describe will arise as a result of one of these two scenarios: (1) the column you're referring to is the column of 1's which is added to your matrix of covariates so that your linear regression has an intercept term, or (2) the column is a different column than the previously mentioned column of ones, giving you two columns of constants [****]. For Scenario 1: skip that column, standardize all the other columns, and then run the regression as you normally would. For Scenario 2, however, you'll have to get rid of that additional constant column entirely. In fact, regardless of the question of standardization, you'll never be able to run the regression with two constant columns, since then you would have perfect collinearity. The result is that even if you try running the regression, the computer program will spit out an error message and quit halfway through [Note: this is because an OLS regression requires the matrix X'X to be non-singular for things to work out correctly]. Anyway, good luck with your, um, regressing! [****] Just to clarify: what I mean by "two columns of constants" is that you have one column in which every element is '1' and a second column in which every element is some constant 'k'...
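Scenario 1 above ("skip that column, standardize the others") is easy to sketch in numpy; the intercept is assumed to sit in column 0, and the sample data is made up:

```python
import numpy as np

def standardize_skip_intercept(X):
    """Z-score every column except column 0 (the column of ones)."""
    Z = X.astype(float).copy()
    mu = Z[:, 1:].mean(axis=0)
    sd = Z[:, 1:].std(axis=0, ddof=1)
    Z[:, 1:] = (Z[:, 1:] - mu) / sd
    return Z

X = np.column_stack([np.ones(5),
                     [1.0, 2.0, 3.0, 4.0, 5.0],
                     [10.0, 8.0, 6.0, 4.0, 2.0]])
Z = standardize_skip_intercept(X)
```

After the transform, the intercept column is untouched and every other column has mean 0 and (sample) standard deviation 1.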
26,362
How to standardize an array if standard deviation is zero?
The right way would be to delete the feature column from the data. But as a temporary hack, you could just replace the 0 std with 1 for that feature. This would basically mean that the scaled value would be zero for all the data points for that feature. This makes sense, as it implies that the feature values do not deviate even a bit from the mean (since the value is constant, the constant is the mean). FYI, this is what sklearn does! https://github.com/scikit-learn/scikit-learn/blob/7389dbac82d362f296dc2746f10e43ffa1615660/sklearn/preprocessing/data.py#L70
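The hack described above is a one-liner in numpy. This is a sketch of the idea, not sklearn's actual code, and the example matrix is made up:

```python
import numpy as np

def safe_standardize(X):
    """Z-score columns, mapping zero-variance columns to all zeros."""
    X = X.astype(float)
    mu = X.mean(axis=0)
    sd = X.std(axis=0)
    sd = np.where(sd == 0.0, 1.0, sd)   # the hack: pretend the std is 1
    return (X - mu) / sd

X = np.array([[7.0, 1.0],
              [7.0, 2.0],
              [7.0, 3.0]])              # first column is constant
Z = safe_standardize(X)
```

The constant column comes out as all zeros instead of NaNs, while the other columns are standardized normally.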
26,363
How to standardize an array if standard deviation is zero?
The feature that has zero variance is useless; remove it. Consider this: if this were the only feature, you wouldn't learn anything about the response to this feature from the data. In the multivariate case, it takes linear algebra to come to the same conclusion, but the idea's the same.
26,364
Bonferroni correction with Pearson's correlation and linear regression
It sounds to me like this is exploratory research / data analysis, not confirmatory. That is, it doesn't sound like you started with a theory that said only extroversion should be related to PCT for some reason. So I wouldn't worry too much about alpha adjustments, as I think of that as more related to CDA (confirmatory data analysis), nor would I think that your finding is necessarily true. Instead, I would think about it as something that might be true, and play with these ideas / possibilities in light of what you know about the topics at hand. Having seen this finding, does it ring true or are you skeptical? What would it mean for the current theories if it were true? Would it be interesting? Would it be important? Is it worth running a new (confirmatory) study to determine if it's true, bearing in mind the potential time, effort and expense that that entails? Remember that the reason for Bonferroni corrections is that we expect something to show up when we have so many variables. So I think a heuristic can be 'would this study be sufficiently informative, even if the truth turns out to be no'? If you decide that it's not worth it, this relationship stays in the 'might' category and you move on, but if it is worth doing, test it.
26,365
Bonferroni correction with Pearson's correlation and linear regression
I think Chl has pointed you to a lot of good material and references without directly answering the question. The answer I give may be a little controversial, because I know some statisticians don't believe in multiplicity adjustment and many Bayesians don't believe in p-values. In fact, I once heard Don Berry say that using the Bayesian approach, particularly in adaptive designs, controlling the type I error is not a concern. He took that back later after seeing how practically important it is to the FDA to make sure that bad drugs don't get to market. My answer is yes and no. If you do 45 tests, you certainly need to adjust for multiplicity, but not with Bonferroni, because it could be far too conservative. The inflation of the type I error when you data mine for correlation is clearly an issue that got attention with the cited post "look and you shall find correlation". All three links provide great information. What I think is missing is the resampling approach to p-value adjustment as developed so nicely by Westfall and Young. You can find examples in my bootstrap book or complete details in their resampling book. My recommendation would be to consider bootstrap or permutation methods for p-value adjustment, and perhaps consider false discovery rate over the stringent family-wise error rate. Link to Westfall and Young: http://www.amazon.com/Resampling-Based-Multiple-Testing-Adjustment-Probability/dp/0471557617/ref=sr_1_1?s=books&ie=UTF8&qid=1343398751&sr=1-1&keywords=peter+westfall A recent book by Bretz et al on multiple comparisons: http://www.amazon.com/Multiple-Comparisons-Using-Frank-Bretz/dp/1584885742/ref=sr_1_2?s=books&ie=UTF8&qid=1343398796&sr=1-2&keywords=peter+westfall My book with the material in section 8.5 and tons of bootstrap references: http://www.amazon.com/Bootstrap-Methods-Practitioners-Researchers-Probability/dp/0471756210/ref=sr_1_2?s=books&ie=UTF8&qid=1343398953&sr=1-2&keywords=michael+chernick
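For concreteness, here is a small, hedged sketch of the Westfall-Young maxT idea in numpy: a single-step version built on a two-group mean-difference statistic. Real use would want the step-down refinement and a problem-appropriate statistic; all names and the simulated data here are illustrative:

```python
import numpy as np

def maxT_adjusted_pvalues(X, labels, B=500, seed=0):
    """Single-step maxT adjusted p-values for per-feature group mean differences.

    X: (n_samples, n_features) data matrix; labels: boolean group indicator.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels, dtype=bool)

    def stats(lab):
        return np.abs(X[lab].mean(axis=0) - X[~lab].mean(axis=0))

    t_obs = stats(labels)
    exceed = np.zeros(X.shape[1])
    for _ in range(B):
        perm = rng.permutation(labels)           # relabel the samples
        exceed += stats(perm).max() >= t_obs     # compare every feature to the permutation max
    return (exceed + 1.0) / (B + 1.0)            # add-one correction keeps p > 0

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 6))
X[:20, 0] += 2.0                                 # one genuinely shifted feature
labels = np.array([True] * 20 + [False] * 20)
p_adj = maxT_adjusted_pvalues(X, labels, B=500)
```

Because every feature is compared to the same permutation distribution of the maximum statistic, the adjustment controls the family-wise error rate while exploiting the correlation structure of the data, which is what makes it less conservative than Bonferroni.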
26,366
Difference between superpopulation and infinite population
In survey sampling you have a finite population. One modeling method envisions the finite population as coming from a theoretical infinite population; this imaginary population is called a superpopulation. On the other hand, selecting a random sample (not from a finite population) is viewed as sampling at random from an infinite population. So the term infinite population is used for ordinary sampling, while superpopulation specifically refers to the situation where the sample is taken from a finite population.
26,367
Difference between superpopulation and infinite population
In the field of ecological statistics (e.g. mark-recapture) we often have long time series of data, where individuals may be exposed to sampling for only a portion of the total time series. In this context we can consider every individual that was exposed to sampling during the course of the experiment, a measure we call the superpopulation. This is different from the population at any given time point, which is the total number of individuals exposed to sampling at that time point. Both of these definitions of a population are finite measures. Aside - I used the term "exposed to sampling" because population heterogeneity is often the rule rather than the exception in ecology. Subpopulations may exist that behave differently and may completely avoid detection by our survey techniques. These individuals thus are not part of our statistical population definition.
26,368
Difference between superpopulation and infinite population
A population which is uncountable (or at least not practically countable), such as the number of red cells in blood or the number of infective bacteria in the body of a patient, is called an "infinite population". An imaginary or theoretical population is called a superpopulation, or hypothetical population.
26,369
How defensible is it to choose $\lambda$ in a LASSO model so that it yields the number of nonzero predictors one desires?
If you want to have at least a definite number of predictors with some range of values defined by the literature, why choose the pure-LASSO approach to begin with? As @probabilityislogic suggested, you should be using some informative priors on those variables where you have some knowledge. If you want to retain some of the LASSO properties for the rest of the predictors, maybe you could use a prior with a double exponential distribution for each other input, i.e., use a density of the form $$p(\beta_i)=\frac{\lambda}{2}\exp\left(-\lambda|\beta_i|\right),$$ where $\lambda$ is the Lagrange multiplier corresponding to the pure-LASSO solution. This last statement comes from the fact that, in the absence of the variables with the informative priors, this is another way of deriving the LASSO (by maximizing the posterior mode given normality assumptions for the residuals).
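The Laplace-prior/LASSO equivalence can be checked numerically: with a single unit-variance Gaussian observation y and the double-exponential prior above, the posterior mode minimizes 0.5(y-β)² + λ|β|, whose closed-form solution is soft-thresholding. A sketch (λ and y are arbitrary illustration values):

```python
import numpy as np

lam = 1.0
y = 2.5   # single observation with unit-variance Gaussian likelihood

# Negative log-posterior under the double-exponential (Laplace) prior,
# minimized on a fine grid:
b = np.linspace(-5.0, 5.0, 100001)
obj = 0.5 * (y - b) ** 2 + lam * np.abs(b)
b_map = b[np.argmin(obj)]

# Closed form: soft-thresholding, the building block of the lasso
soft = np.sign(y) * max(abs(y) - lam, 0.0)
```

The grid minimizer and the soft-threshold formula agree, which is the one-dimensional version of "posterior mode with Laplace prior = lasso estimate".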
26,370
How defensible is it to choose $\lambda$ in a LASSO model so that it yields the number of nonzero predictors one desires?
There exists a nice way to perform LASSO-like selection with a fixed number of predictors: least angle regression (LAR or LARS), described in Efron's paper. During the iterative procedure it creates a sequence of linear models, each new one having one more predictor, so you can select the one with the desired number of predictors. Another way is $l_1$ or $l_2$ regularization. As mentioned by Nestor, using appropriate priors you can incorporate prior knowledge into the model. The so-called relevance vector machine by Tipping can be useful.
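To make the "stop after k predictors" idea concrete, here is a greedy forward-selection sketch in numpy. This is a crude stand-in for LARS, not the LARS algorithm itself (scikit-learn users may also want to look at the `n_nonzero_coefs` option of its LARS implementation); all data here is simulated for illustration:

```python
import numpy as np

def forward_select(X, y, k):
    """Greedily pick k predictor columns (a crude stand-in for LARS)."""
    selected = []
    residual = y.astype(float).copy()
    for _ in range(k):
        # pick the column most correlated with the current residual
        scores = np.abs(X.T @ residual)
        scores[selected] = -np.inf          # never re-pick a column
        selected.append(int(np.argmax(scores)))
        # refit OLS on the selected columns and update the residual
        beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ beta
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
y = 3.0 * X[:, 2] - 2.0 * X[:, 5] + 0.1 * rng.standard_normal(100)
chosen = forward_select(X, y, k=2)
```

The point is that the stopping rule is the predictor count itself, so you get exactly the model size you asked for.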
26,371
How defensible is it to choose $\lambda$ in a LASSO model so that it yields the number of nonzero predictors one desires?
No, that is not defensible. The great hurdle that model selection procedures are designed to overcome is that the cardinality of the true support $\left| S^* \right| = \left| \left\{ j : \beta^*_j \neq 0 \right\} \right|$ is unknown. (Here we have that $\beta^*$ is the "true" coefficient.) Because $|S^*|$ is unknown, a model selection procedure has to exhaustively search over all $2^p$ possible models; however, if we did know $|S^*|$, we could just check the ${p \choose |S^*|}$ models, which is far fewer. The theory of the lasso relies on the regularization parameter $\lambda$ being sufficiently large so as to make the selected model sufficiently sparse. It could be that your 10 features are too many or too few, since it isn't trivial to turn a lower bound on $\lambda$ into an upper bound on $|S^*|$. Let $\hat\beta$ be our data-driven estimate for $\beta^*$, and put $\hat{S} = \{j \, : \, \hat\beta_j \neq 0 \}$. Then, perhaps you're trying to ensure that $S^* \subseteq \hat{S}$ so that you've recovered at least the relevant features? Or maybe you're trying to establish that $\hat{S} \subseteq S^*$ so that you know that features you've found are all worthwhile? In these cases, your procedure would be more justified if you had prior information on the relative sizes of $S^*$. Also, note, you can leave some coefficients unpenalized when performing lasso in, for instance, glmnet.
How defensible is it to choose $\lambda$ in a LASSO model so that it yields the number of nonzero pr
No, that is not defensible. The great hurdle that model selection procedures are designed to overcome is that the the cardinality of the true support $\left| S^* \right| = \left| \left\{ j : \beta^*_j
How defensible is it to choose $\lambda$ in a LASSO model so that it yields the number of nonzero predictors one desires? No, that is not defensible. The great hurdle that model selection procedures are designed to overcome is that the the cardinality of the true support $\left| S^* \right| = \left| \left\{ j : \beta^*_j \neq 0 \right\} \right|$ is unknown. (Here we have that $\beta^*$ is the "true" coefficient.) Because $|S^*|$ is unknown, a model selection procedure has to exhaustively search over all $2^p$ possible models; however, if we did know $|S^*|$, we could just check the ${p \choose |S^*|}$ models, which is far fewer. The theory of the lasso relies on the regularization parameter $\lambda$ being sufficiently large so as to make the selected model sufficiently sparse. It could be that your 10 features are too many or too few, since it isn't trivial to turn a lower bound on $\lambda$ into an upper bound on $|S^*|$. Let $\hat\beta$ be our data-driven estimate for $\beta^*$, and put $\hat{S} = \{j \, : \, \hat\beta_j \neq 0 \}$. Then, perhaps you're trying to ensure that $S^* \subseteq \hat{S}$ so that you've recovered at least the relevant features? Or maybe you're trying to establish that $\hat{S} \subseteq S^*$ so that you know that features you've found are all worthwhile? In these cases, your procedure would be more justified if you had prior information on the relative sizes of $S^*$. Also, note, you can leave some coefficients unpenalized when performing lasso in, for instance, glmnet.
How defensible is it to choose $\lambda$ in a LASSO model so that it yields the number of nonzero pr No, that is not defensible. The great hurdle that model selection procedures are designed to overcome is that the cardinality of the true support $\left| S^* \right| = \left| \left\{ j : \beta^*_j
26,372
Which optimization algorithm to use for problems with many local optima and expensive goal function? [duplicate]
In case of expensive functions without derivatives, a useful abstract framework for optimization is: Compute the function at a few points (it may be a regular grid or even not) Repeat Interpolate the data / fit a stochastic model Validate the model through statistical tests Find the model’s maximum. If it is better than the best one you have previously obtained, update the maximum Put this point in the dataset That should also fit well with the pseudo-convexity you referred to. References here: Efficient Global Optimization of Expensive Black-Box Functions A Rigorous Framework for Optimization of Expensive Functions by Surrogates
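The loop above can be sketched in plain Python. This is only a toy illustration of the framework, not the EGO method from the references: the "stochastic model" is replaced by the simplest possible surrogate, a quadratic least-squares fit (with no statistical validation step), the expensive goal function is a made-up stand-in, and all names are invented for the example. In practice one would use a kriging / Gaussian-process surrogate instead.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c (the cheap surrogate)."""
    # Normal equations for the basis [x^2, x, 1], using power sums s[0]..s[4].
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    M = [[s[4], s[3], s[2], t[2]],
         [s[3], s[2], s[1], t[1]],
         [s[2], s[1], s[0], t[0]]]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for col in range(i, 4):
                M[r][col] -= f * M[i][col]
    w = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        w[i] = (M[i][3] - sum(M[i][j] * w[j] for j in range(i + 1, 3))) / M[i][i]
    return w  # a, b, c

def maximize_surrogate(a, b, c, lo, hi):
    """Maximum of the fitted quadratic on [lo, hi]."""
    cands = [lo, hi]
    if a < 0:
        cands.append(min(hi, max(lo, -b / (2 * a))))  # interior vertex
    return max(cands, key=lambda x: a * x * x + b * x + c)

def surrogate_maximize(f, lo, hi, n_init=5, n_iter=10):
    # Step 1: evaluate the expensive function at a few (grid) points.
    xs = [lo + (hi - lo) * i / (n_init - 1) for i in range(n_init)]
    ys = [f(x) for x in xs]
    for _ in range(n_iter):
        a, b, c = fit_quadratic(xs, ys)              # step 2: fit the surrogate
        x_new = maximize_surrogate(a, b, c, lo, hi)  # step 3: surrogate maximum
        xs.append(x_new)                             # step 4: evaluate there,
        ys.append(f(x_new))                          #         extend the dataset
    best = max(range(len(xs)), key=lambda i: ys[i])
    return xs[best], ys[best]

# Stand-in for an expensive black-box goal function (maximum 2.0 at x = 1.3).
expensive = lambda x: -(x - 1.3) ** 2 + 2.0
x_star, y_star = surrogate_maximize(expensive, -5.0, 5.0)
```

Each pass through the loop spends exactly one expensive evaluation, at the point the cheap model currently believes is best, which is the whole appeal of the surrogate framework.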
Which optimization algorithm to use for problems with many local optima and expensive goal function?
In case of expensive functions without derivatives, a useful abstract framework for optimization is: Compute the function at a few points (it may be a regular grid or even not) Repeat Interpolate the data
Which optimization algorithm to use for problems with many local optima and expensive goal function? [duplicate] In case of expensive functions without derivatives, a useful abstract framework for optimization is: Compute the function at a few points (it may be a regular grid or even not) Repeat Interpolate the data / fit a stochastic model Validate the model through statistical tests Find the model’s maximum. If it is better than the best one you have previously obtained, update the maximum Put this point in the dataset That should also fit well with the pseudo-convexity you referred to. References here: Efficient Global Optimization of Expensive Black-Box Functions A Rigorous Framework for Optimization of Expensive Functions by Surrogates
Which optimization algorithm to use for problems with many local optima and expensive goal function? In case of expensive functions without derivatives, a useful abstract framework for optimization is: Compute the function at a few points (it may be a regular grid or even not) Repeat Interpolate the data
26,373
Improving variable names in a dataset
The best response to this question is to duck it. Fundamentally, it doesn't much matter what the short names of the variables are as long as they are well documented in a codebook somewhere. Alas, since R has no native resources for this, people tend not to bother. (This lack is, for me, the single biggest failing in the language as a statistical tool.) There are various R packages providing this machinery, e.g. Hmisc, which you use, and memisc. But really the best option is to make the whole thing into an R package. That way the processed data can be an object with a corresponding help page that describes what everything is now called and can assign credit where it's due. The package can also expose the raw data and your processing functions for people to see what you did to make the final product. Also, a suggestion: don't include derived data, such as z-scored versions of variables, in the final data object at all if you can help it; just provide the functions to make it instead. Derived data is just trouble from the data management point of view.
Improving variable names in a dataset
The best response to this question is to duck it. Fundamentally, it doesn't much matter what the short names of the variables are as long as they are well documented in a codebook somewhere. Alas, s
Improving variable names in a dataset The best response to this question is to duck it. Fundamentally, it doesn't much matter what the short names of the variables are as long as they are well documented in a codebook somewhere. Alas, since R has no native resources for this, people tend not to bother. (This lack is, for me, the single biggest failing in the language as a statistical tool.) There are various R packages providing this machinery, e.g. Hmisc, which you use, and memisc. But really the best option is to make the whole thing into an R package. That way the processed data can be an object with a corresponding help page that describes what everything is now called and can assign credit where it's due. The package can also expose the raw data and your processing functions for people to see what you did to make the final product. Also, a suggestion: don't include derived data, such as z-scored versions of variables, in the final data object at all if you can help it; just provide the functions to make it instead. Derived data is just trouble from the data management point of view.
Improving variable names in a dataset The best response to this question is to duck it. Fundamentally, it doesn't much matter what the short names of the variables are as long as they are well documented in a codebook somewhere. Alas, s
26,374
Improving variable names in a dataset
Here's a small thing: I think it's better to use underscores than periods. The reason is that most programming languages, unlike R, don't support periods in identifiers, but nearly all support underscores. And I presume you want your dataset to be useful to people who aren't using R.
Improving variable names in a dataset
Here's a small thing: I think it's better to use underscores than periods. The reason is that most programming languages, unlike R, don't support periods in identifiers, but nearly all support undersc
Improving variable names in a dataset Here's a small thing: I think it's better to use underscores than periods. The reason is that most programming languages, unlike R, don't support periods in identifiers, but nearly all support underscores. And I presume you want your dataset to be useful to people who aren't using R.
Improving variable names in a dataset Here's a small thing: I think it's better to use underscores than periods. The reason is that most programming languages, unlike R, don't support periods in identifiers, but nearly all support undersc
26,375
Improving variable names in a dataset
First of all, thank you for doing this - I'm sure many people will appreciate it, even though not many will know that you did it. The RStudio user interface does not (at least with default options?) interpret any separators within a variable name. For example, Eclipse treats capitalized parts as separate words, so you can use Ctrl+arrows to quickly edit Java-style code like ageStandardizedMaleSchool. I can't come up with any better reasons to prefer one separator over another, so either underscores or caps seem fine to me. In general, I suggest making the variable names longer, rather than sticking to some complex abbreviation scheme. It is easy to make typos like talk.prob.m.sum instead of talk.prob.sum.ms, and it's difficult to spot and trace errors in statistical analysis. (Somewhat related: a nice saying I've read on some blog is to write your variable names like Scandinavian words - SickHouse and ToothHealer instead of hospital and dentist.) On a final note: standardizing, centering etc. are generally done after data cleaning. If there's no cleaning, then maybe consider leaving that to whoever will analyze the data. Or, if you're doing the cleaning yourself as well, indicate all the steps you've taken - subsequent analyses and interpretations might depend a lot on that.
Improving variable names in a dataset
First of all, thank you for doing this - I'm sure many people will appreciate it, even though not many will know that you did it. RStudio user interface does not (at least with default options?) inter
Improving variable names in a dataset First of all, thank you for doing this - I'm sure many people will appreciate it, even though not many will know that you did it. The RStudio user interface does not (at least with default options?) interpret any separators within a variable name. For example, Eclipse treats capitalized parts as separate words, so you can use Ctrl+arrows to quickly edit Java-style code like ageStandardizedMaleSchool. I can't come up with any better reasons to prefer one separator over another, so either underscores or caps seem fine to me. In general, I suggest making the variable names longer, rather than sticking to some complex abbreviation scheme. It is easy to make typos like talk.prob.m.sum instead of talk.prob.sum.ms, and it's difficult to spot and trace errors in statistical analysis. (Somewhat related: a nice saying I've read on some blog is to write your variable names like Scandinavian words - SickHouse and ToothHealer instead of hospital and dentist.) On a final note: standardizing, centering etc. are generally done after data cleaning. If there's no cleaning, then maybe consider leaving that to whoever will analyze the data. Or, if you're doing the cleaning yourself as well, indicate all the steps you've taken - subsequent analyses and interpretations might depend a lot on that.
Improving variable names in a dataset First of all, thank you for doing this - I'm sure many people will appreciate it, even though not many will know that you did it. RStudio user interface does not (at least with default options?) inter
26,376
Optimal case/control ratio in a case-control study
As @EpiGrad says - there is no optimal ratio, since otherwise everyone would use it. I suggest you address the issue by looking at the cost of a control versus the cost of a case. Cases The basis for a case-control study is that you want to study rare outcomes (cancer, re-operations etc). By being rare, your problem is that finding these patients is the major cost. Controls Controls are basically anyone without the disease, and therefore you have an abundance of these. Finding 10 more controls is usually not so difficult. Statistics What you want to see is something where you have a difference between the two studied samples, like in the case below: If you think you'll end up in a situation where you can't see the difference, you need to increase your number of patients. In other words you have this situation: That you want to change by recruiting more patients in one group into this one: The statistics are straightforward: you gain the most power by having groups of equal size. Since you're usually in a situation where you can't find more patients in the rare-outcome group, you want to increase the number of patients in the control group. The central limit theorem gives that the width of the normal curve follows this simple equation: $SE = \frac{SD}{\sqrt{n}}$ SE = standard error (the standard deviation of the sampling distribution of the mean) SD = standard deviation of your sample n = number of patients in your sample As you can see, the effect that each studied person has on the width of the curve decreases as defined by the $\sqrt{n}$. This gives that the optimal ratio is where you get the most out of the time and effort you spend recruiting patients/controls. What's vital in case-control studies is that you have to put just as much effort into the controls as you do into the patients. For instance, you can't interview the interesting cases yourself while sending a student to talk to the controls.
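The $SE = \frac{SD}{\sqrt{n}}$ relationship above also shows numerically why extra controls have diminishing returns. A small sketch, assuming equal SDs in both groups (the numbers are made up): the standard error of the difference between the two group means is $SD\sqrt{1/n_{cases} + 1/n_{controls}}$, so with the number of cases fixed, additional controls only shrink the second term.

```python
import math

def se_of_difference(sd, n_cases, n_controls):
    """SE of the difference in means between two groups sharing a common SD."""
    return sd * math.sqrt(1.0 / n_cases + 1.0 / n_controls)

sd, n_cases = 1.0, 50  # hypothetical study
for ratio in (1, 2, 4, 10):
    se = se_of_difference(sd, n_cases, ratio * n_cases)
    print(f"{ratio:>2}:1 controls per case -> SE of difference = {se:.4f}")
# Even infinitely many controls cannot push the SE below sd / sqrt(n_cases).
```

For these numbers the SE of the difference falls from 0.200 at 1:1 to 0.158 at 4:1, but only to 0.148 at 10:1, which is the usual argument for stopping around a handful of controls per case.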
Optimal case/control ratio in a case-control study
As @EpiGrad says - there is no optimal ratio since otherwise everyone would use it. I suggest you address the issue by looking at the cost of a control versus the cost of a case. Cases The basis for
Optimal case/control ratio in a case-control study As @EpiGrad says - there is no optimal ratio, since otherwise everyone would use it. I suggest you address the issue by looking at the cost of a control versus the cost of a case. Cases The basis for a case-control study is that you want to study rare outcomes (cancer, re-operations etc). By being rare, your problem is that finding these patients is the major cost. Controls Controls are basically anyone without the disease, and therefore you have an abundance of these. Finding 10 more controls is usually not so difficult. Statistics What you want to see is something where you have a difference between the two studied samples, like in the case below: If you think you'll end up in a situation where you can't see the difference, you need to increase your number of patients. In other words you have this situation: That you want to change by recruiting more patients in one group into this one: The statistics are straightforward: you gain the most power by having groups of equal size. Since you're usually in a situation where you can't find more patients in the rare-outcome group, you want to increase the number of patients in the control group. The central limit theorem gives that the width of the normal curve follows this simple equation: $SE = \frac{SD}{\sqrt{n}}$ SE = standard error (the standard deviation of the sampling distribution of the mean) SD = standard deviation of your sample n = number of patients in your sample As you can see, the effect that each studied person has on the width of the curve decreases as defined by the $\sqrt{n}$. This gives that the optimal ratio is where you get the most out of the time and effort you spend recruiting patients/controls. What's vital in case-control studies is that you have to put just as much effort into the controls as you do into the patients. For instance, you can't interview the interesting cases yourself while sending a student to talk to the controls.
Identifying the correct source population can also be rather challenging.
Optimal case/control ratio in a case-control study As @EpiGrad says - there is no optimal ratio since otherwise everyone would use it. I suggest you address the issue by looking at the cost of a control versus the cost of a case. Cases The basis for
26,377
Optimal case/control ratio in a case-control study
There isn't necessarily an optimal case-control study ratio, otherwise it would be the one we all used. Generally, it is argued that a higher ratio of controls to cases results in greater study power, though at the cost of a more expensive study. I once did an analysis of a series of case-control studies nested within a cohort study. The precision of the estimates increased dramatically using 2 or 3 controls per case, but then the payoff began to level out. It may be something worth evaluating in the study planning stage via simulation.
Optimal case/control ratio in a case-control study
There isn't necessarily an optimal case-control study ratio, otherwise it would be the one we all used. Generally, it is argued that a higher ratio of controls to cases results in greater study power,
Optimal case/control ratio in a case-control study There isn't necessarily an optimal case-control study ratio, otherwise it would be the one we all used. Generally, it is argued that a higher ratio of controls to cases results in greater study power, though at the cost of a more expensive study. I once did an analysis of a series of case-control studies nested within a cohort study. The precision of the estimates increased dramatically using 2 or 3 controls per case, but then the payoff began to level out. It may be something worth evaluating in the study planning stage via simulation.
Optimal case/control ratio in a case-control study There isn't necessarily an optimal case-control study ratio, otherwise it would be the one we all used. Generally, it is argued that a higher ratio of controls to cases results in greater study power,
26,378
How to understand standardized residual in regression analysis?
I would say that an individual number (such as a residual), which resulted from a random draw from a probability distribution, is a realized value, not a random variable. Likewise, I would say that the set of $N$ residuals, calculated from your data and your model fit using $\bf{e}=\bf{y}-\bf{\hat{y}}$, is a set of realized values. This set of numbers may be loosely conceptualized as independent draws from an underlying distribution $\epsilon \sim \mathcal{N}(\mu,\sigma^2)$. (Unfortunately however, there are several additional complexities here. For example, you do not actually have $N$ independent pieces of information, because the residuals, $\bf{e}$, must satisfy two conditions: $\sum e_i=0$, and $\sum x_ie_i=0$.) Now, given some set of numbers, be they residuals or whatever, it is certainly true that they have a variance, $\sum(e_i-\bar{e})^2/N$, but this is uninteresting. What we care about is being able to say something about the data generating process (for instance, to estimate the variance of the population distribution). Using the preceding formula, we could give an approximation by replacing the $N$ with the residual degrees of freedom, but this may not be a good approximation. This is a topic that can get very complicated very fast, but a couple of possible reasons could be heteroscedasticity (i.e., that the variance of the population differs at different levels of $x$), and the presence of outliers (i.e., that a given residual is drawn from a different population entirely). Almost certainly, in practice, you will not be able to estimate the variance of the population from which an outlier was drawn, but nonetheless, in theory, it does have a variance. I suspect something along these lines is what the authors had in mind; however, I should note that I have not read that book.
Update: Upon rereading the question, I suspect the quote may be referring to the way the $x$-value of a point influences the fitted regression line, and thereby the value of the residual associated with that point. The key idea to grasp here is leverage. I discuss these topics in my answer here: Interpreting plot.lm().
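The two constraints on the residuals, $\sum e_i = 0$ and $\sum x_i e_i = 0$, follow from the normal equations in a regression with an intercept, and they are easy to verify numerically. A small Python sketch with made-up data (simple regression only; the function name is invented for the example):

```python
def ols_residuals(x, y):
    """Simple-regression OLS fit; returns the residuals e = y - yhat."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    beta1 = sxy / sxx                 # slope
    beta0 = ybar - beta1 * xbar       # intercept
    return [yi - (beta0 + beta1 * xi) for xi, yi in zip(x, y)]

# Made-up data, just to exercise the two identities.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
e = ols_residuals(x, y)
sum_e = sum(e)                                  # ~0 up to rounding
sum_xe = sum(xi * ei for xi, ei in zip(x, e))   # ~0 up to rounding
```

Because of these two linear constraints, only $N-2$ of the residuals are free to vary, which is exactly the residual degrees of freedom mentioned above.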
How to understand standardized residual in regression analysis?
I would say that an individual number (such as a residual), which resulted from a random draw from a probability distribution, is a realized value, not a random variable. Likewise, I would say that t
How to understand standardized residual in regression analysis? I would say that an individual number (such as a residual), which resulted from a random draw from a probability distribution, is a realized value, not a random variable. Likewise, I would say that the set of $N$ residuals, calculated from your data and your model fit using $\bf{e}=\bf{y}-\bf{\hat{y}}$, is a set of realized values. This set of numbers may be loosely conceptualized as independent draws from an underlying distribution $\epsilon \sim \mathcal{N}(\mu,\sigma^2)$. (Unfortunately however, there are several additional complexities here. For example, you do not actually have $N$ independent pieces of information, because the residuals, $\bf{e}$, must satisfy two conditions: $\sum e_i=0$, and $\sum x_ie_i=0$.) Now, given some set of numbers, be they residuals or whatever, it is certainly true that they have a variance, $\sum(e_i-\bar{e})^2/N$, but this is uninteresting. What we care about is being able to say something about the data generating process (for instance, to estimate the variance of the population distribution). Using the preceding formula, we could give an approximation by replacing the $N$ with the residual degrees of freedom, but this may not be a good approximation. This is a topic that can get very complicated very fast, but a couple of possible reasons could be heteroscedasticity (i.e., that the variance of the population differs at different levels of $x$), and the presence of outliers (i.e., that a given residual is drawn from a different population entirely). Almost certainly, in practice, you will not be able to estimate the variance of the population from which an outlier was drawn, but nonetheless, in theory, it does have a variance. I suspect something along these lines is what the authors had in mind; however, I should note that I have not read that book.
Update: Upon rereading the question, I suspect the quote may be referring to the way the $x$-value of a point influences the fitted regression line, and thereby the value of the residual associated with that point. The key idea to grasp here is leverage. I discuss these topics in my answer here: Interpreting plot.lm().
How to understand standardized residual in regression analysis? I would say that an individual number (such as a residual), which resulted from a random draw from a probability distribution, is a realized value, not a random variable. Likewise, I would say that t
26,379
What is the Drosophila of AI now?
I just googled and a lot of people quote John McCarthy calling Go "the new Drosophila of AI" - although I haven't found his original saying. There's also an interesting paper "THE DROSOPHILA REVISITED" (pdf) which, in particular, reads: After the match DEEP BLUE - Kasparov (New York, 1997) in which the machine proved its superiority, a slow transition was observed in the games world from chess to other games, with Go as the current frontrunner. The ICCA changed its name to ICGA, and the question arose: Is Go the new Drosophila of AI? Some would agree with this statement and others would vigorously oppose it. In more balanced terms one would say: for such a change of paradigm, a paradigm shift is a prerequisite, a shift of focus is not sufficient. At this moment (2010), we may state that the conditions are fulfilled, since MCTS can be considered as a paradigm shift.
What is the Drosophila of AI now?
I just googled and a lot of people quote John McCarthy calling Go "the new Drosophila of AI" - although I haven't found his original saying. There's also an interesting paper "THE DROSOPHILA REVISITED
What is the Drosophila of AI now? I just googled and a lot of people quote John McCarthy calling Go "the new Drosophila of AI" - although I haven't found his original saying. There's also an interesting paper "THE DROSOPHILA REVISITED" (pdf) which, in particular, reads: After the match DEEP BLUE - Kasparov (New York, 1997) in which the machine proved its superiority, a slow transition was observed in the games world from chess to other games, with Go as the current frontrunner. The ICCA changed its name to ICGA, and the question arose: Is Go the new Drosophila of AI? Some would agree with this statement and others would vigorously oppose it. In more balanced terms one would say: for such a change of paradigm, a paradigm shift is a prerequisite, a shift of focus is not sufficient. At this moment (2010), we may state that the conditions are fulfilled, since MCTS can be considered as a paradigm shift.
What is the Drosophila of AI now? I just googled and a lot of people quote John McCarthy calling Go "the new Drosophila of AI" - although I haven't found his original saying. There's also an interesting paper "THE DROSOPHILA REVISITED
26,380
What is the Drosophila of AI now?
How about Robotics (specifically, humanoid robots)? Specifically I think the challenge in robotics is to combine a set of technologies that in themselves are quite well developed: Computer vision: the robots need fast processing of the visual world Internal modelling of the world: they also need to know how they can affect the world, and how to connect the visual landscape with their movement Speech recognition: we want to be able to talk to them, right? Speech synthesis: and we want to hear what they have to say! Reinforcement Learning: they should be able to learn through trial and error, etc. Bayesian reasoning: at some point they will probably need to have probabilistic notions of objects in the world in order to facilitate decision making It would be easy enough to give them chess- or go-playing capabilities as well ;-) I think the only trouble with this, from the Drosophila point of view, is that there is a significant cost in terms of hardware. However there's no reason why the robot couldn't live in a simulated world. And perhaps there is something in the gaming world like this, where you can create your own AI bot that can interact with the physics engine using multiple modalities?
What is the Drosophila of AI now?
How about Robotics (specifically, humanoid robots)? Specifically I think the challenge in robotics is to combine a set of technologies that in themselves are quite well developed: Computer vision: th
What is the Drosophila of AI now? How about Robotics (specifically, humanoid robots)? Specifically I think the challenge in robotics is to combine a set of technologies that in themselves are quite well developed: Computer vision: the robots need fast processing of the visual world Internal modelling of the world: they also need to know how they can affect the world, and how to connect the visual landscape with their movement Speech recognition: we want to be able to talk to them, right? Speech synthesis: and we want to hear what they have to say! Reinforcement Learning: they should be able to learn through trial and error, etc. Bayesian reasoning: at some point they will probably need to have probabilistic notions of objects in the world in order to facilitate decision making It would be easy enough to give them chess- or go-playing capabilities as well ;-) I think the only trouble with this, from the Drosophila point of view, is that there is a significant cost in terms of hardware. However there's no reason why the robot couldn't live in a simulated world. And perhaps there is something in the gaming world like this, where you can create your own AI bot that can interact with the physics engine using multiple modalities?
What is the Drosophila of AI now? How about Robotics (specifically, humanoid robots)? Specifically I think the challenge in robotics is to combine a set of technologies that in themselves are quite well developed: Computer vision: th
26,381
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural networks?
I would not touch the data at all. Use this for autocorrelation with NaNs: http://www.mathworks.com/matlabcentral/fileexchange/43840-autocorrelation-and-partial-autocorrelation-with-nans/content/nanautocorr.m "Not touch the data" means not removing any data or time-steps, and not replacing them with 0 or the mean; that would compromise the information about the specific-time-lag linear dependence. I would also avoid simulating the values in the gaps if you are interested in the "SAMPLE" autocorrelation; in any case, even the best simulation technique will not add any more information about the autocorrelation, being based on the data themselves. I partially recoded the Matlab (link above) autocorrelation and partial autocorrelation functions to deal with NaNs: any data pair including a NaN is excluded from the computation. This is done for each lag. It worked for me. Any suggestions are welcome.
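The pairwise-deletion idea described here (for each lag, drop any pair that contains a NaN, then compute the lag covariance from the remaining pairs) is easy to re-implement. The sketch below is a plain-Python analogue, not the linked Matlab code; it normalises by the overall mean and variance of the non-missing values, which is one of several defensible conventions, and it assumes at least one complete pair exists at every lag.

```python
import math

def nan_autocorr(x, max_lag):
    """Sample autocorrelation; any (x[t], x[t+k]) pair with a NaN is excluded."""
    ok = [v for v in x if not math.isnan(v)]
    mean = sum(ok) / len(ok)
    var = sum((v - mean) ** 2 for v in ok) / len(ok)
    acf = []
    for k in range(1, max_lag + 1):
        # Keep only complete pairs at lag k (pairwise deletion).
        pairs = [(a, b) for a, b in zip(x, x[k:])
                 if not (math.isnan(a) or math.isnan(b))]
        cov = sum((a - mean) * (b - mean) for a, b in pairs) / len(pairs)
        acf.append(cov / var)
    return acf

nan = float("nan")
series = [1.0, 2.0, nan, 4.0, 5.0, 6.0, nan, 8.0]
r = nan_autocorr(series, 2)  # r ≈ [0.52, 0.38] for this toy series
```

Note that with pairwise deletion each lag is estimated from a different number of pairs, so the usual confidence bands need adjusting accordingly.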
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural netw
I would not touch the data at all. Use this for autocorrelation with NaNs: http://www.mathworks.com/matlabcentral/fileexchange/43840-autocorrelation-and-partial-autocorrelation-with-nans/content/nanau
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural networks? I would not touch the data at all. Use this for autocorrelation with NaNs: http://www.mathworks.com/matlabcentral/fileexchange/43840-autocorrelation-and-partial-autocorrelation-with-nans/content/nanautocorr.m "Not touch the data" means not removing any data or time-steps, and not replacing them with 0 or the mean; that would compromise the information about the specific-time-lag linear dependence. I would also avoid simulating the values in the gaps if you are interested in the "SAMPLE" autocorrelation; in any case, even the best simulation technique will not add any more information about the autocorrelation, being based on the data themselves. I partially recoded the Matlab (link above) autocorrelation and partial autocorrelation functions to deal with NaNs: any data pair including a NaN is excluded from the computation. This is done for each lag. It worked for me. Any suggestions are welcome.
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural netw I would not touch the data at all. Use this for autocorrelation with NaNs: http://www.mathworks.com/matlabcentral/fileexchange/43840-autocorrelation-and-partial-autocorrelation-with-nans/content/nanau
26,382
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural networks?
There are some algorithms which are immune to missing values, so the preferred solution is to look for them (for instance R's acf for autocorrelation). In general, the way to go is to either just discard data with missing observations (might be very painful) or just to impute their values -- the mean of neighbors might be enough for smooth series and small gaps, but there is of course a plethora of other, more powerful methods, using splines, random/most frequent values, imputation from models, etc.
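The "mean of neighbors" imputation mentioned above generalises to linear interpolation across each interior gap, which reduces to the neighbor mean for a single missing point. A hedged Python sketch (only sensible for smooth series and small gaps; leading and trailing NaN runs are left untouched because they have no neighbor on one side):

```python
import math

def interpolate_gaps(x):
    """Fill interior NaN runs by linear interpolation between their neighbors."""
    y = list(x)
    i = 0
    while i < len(y):
        if math.isnan(y[i]):
            j = i
            while j < len(y) and math.isnan(y[j]):
                j += 1                      # j is the first non-NaN after the run
            if i > 0 and j < len(y):        # interior gap only
                left, right = y[i - 1], y[j]
                span = j - (i - 1)
                for k in range(i, j):
                    frac = (k - (i - 1)) / span
                    y[k] = left + frac * (right - left)
            i = j
        else:
            i += 1
    return y

nan = float("nan")
filled = interpolate_gaps([1.0, nan, nan, 4.0, 5.0])
# filled ≈ [1.0, 2.0, 3.0, 4.0, 5.0]
```

As noted in the other answers, any such filled-in values add no real information, so they should be flagged rather than treated as observations.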
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural netw
There are some algorithms which are immune to missing values, so the preferred solution is to look for them (for instance R's acf for autocorrelation). In general, the way to go is to either just dis
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural networks? There are some algorithms which are immune to missing values, so the preferred solution is to look for them (for instance R's acf for autocorrelation). In general, the way to go is to either just discard data with missing observations (might be very painful) or just to impute their values -- the mean of neighbors might be enough for smooth series and small gaps, but there is of course a plethora of other, more powerful methods, using splines, random/most frequent values, imputation from models, etc.
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural netw There are some algorithms which are immune to missing values, so the preferred solution is to look for them (for instance R's acf for autocorrelation). In general, the way to go is to either just dis
26,383
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural networks?
Use Intervention Detection to impute the missing values, exploiting the useful ARIMA structure and any local time trends and/or level shifts.
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural netw
Use Intervention Detection to impute the missing values, exploiting the useful ARIMA structure and any local time trends and/or level shifts.
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural networks? Use Intervention Detection to impute the missing values, exploiting the useful ARIMA structure and any local time trends and/or level shifts.
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural netw Use Intervention Detection to impute the missing values, exploiting the useful ARIMA structure and any local time trends and/or level shifts.
26,384
How to deal with gaps/NaNs in time series data when using Matlab for autocorrelation and neural networks?
There are 2 problems here. The first is providing a meaningful numerical framework for your autocorrelation answer in Matlab. For this to happen, you need to stretch and/or patch the time-series portion of your data vectors... this 'data integrity' component of the problem is the most fundamental. Secondly, you need to decide how to handle the 'value' component of your vector... this depends to a large extent on the particular application as to what's best to assume (e.g., small missing time-stamps and the corresponding NaNs or Nulls could be safely interpolated from their neighbors... in larger gaps, setting the value to zero is probably safer... or impute as recommended above -- obviously, for this to be meaningful, the gaps again must be comparatively small).
26,385
Good practices when doing time series forecasting
I think it would be worth exploring exponential smoothing models as well. Exponential smoothing models are a fundamentally different class of models from ARIMA models, and may yield different results on your data. This sounds like a valid approach, and is very similar to the time series cross-validation method proposed by Rob Hyndman. I would aggregate the cross-validation error from each forecast (exponential smoothing, ARIMA, ARMAX) and then use the overall error to compare the 3 methods. You may also want to consider a "grid search" for ARIMA parameters, rather than using auto.arima. In a grid search, you would explore each possible parameter for an arima model, and then select the "best" ones using forecast accuracy.
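The time series cross-validation idea can be sketched as a rolling-origin loop. The snippet below (Python/NumPy, with two deliberately simple stand-in forecasters in place of exponential smoothing/ARIMA/ARMAX) shows how one-step-ahead errors from an expanding window are aggregated into a single score you can compare across methods:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=100))   # synthetic series (a random walk)

def naive_forecast(history):
    return history[-1]                # forecast = last observed value

def mean_forecast(history):
    return history.mean()             # forecast = historical mean

def rolling_origin_mae(y, forecaster, min_train=20):
    """One-step-ahead mean absolute error from an expanding window."""
    errors = []
    for t in range(min_train, len(y)):
        errors.append(abs(y[t] - forecaster(y[:t])))
    return float(np.mean(errors))

mae_naive = rolling_origin_mae(y, naive_forecast)
mae_mean = rolling_origin_mae(y, mean_forecast)
# Pick the method with the smaller aggregated error.
```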
26,386
What is the best way to discretize a 1D continuous random variable?
Hint: quantization might be a better keyword to search for information. Designing an "optimal" quantization requires some criterion. To try to conserve the first moment of the discretized variable ... sounds interesting, but I don't think it's very usual. More frequently (especially if we assume a probabilistic model, as you do) one tries to minimize some distortion: we want the discrete variable to be close to the real one, in some sense. If we stipulate minimum average squared error (not always the best error measure, but the most tractable), the problem is well known, and we can easily build a non-uniform quantizer with minimum rate distortion if we know the probability density of the source; this is almost synonymous with the "Lloyd-Max quantizer". Because a non-uniform quantizer (in 1D) is equivalent to pre-applying a non-linear transformation to a uniform quantizer, this kind of transformation ("companding") (in probabilistic terms, a function that turns our variable into a quasi-uniform one) is very related to non-uniform quantization (sometimes the concepts are used interchangeably). A pair of venerable examples are the u-Law and A-Law specifications for telephony.
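A minimal sketch of the Lloyd iterations for a 1-D minimum-MSE quantizer, assuming we can draw samples from the source (Python/NumPy; the standard-normal source and the quantile-based initialisation are just convenient choices for this example):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=10_000)   # stand-in for the source density

def lloyd_max(samples, n_levels=4, n_iter=50):
    """Alternate the two optimality conditions of a 1-D MSE quantizer:
    boundaries at midpoints of levels; levels at cell conditional means."""
    # start from evenly spaced sample quantiles (keeps cells non-empty)
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        edges = (levels[:-1] + levels[1:]) / 2          # nearest-level rule
        idx = np.searchsorted(edges, samples)           # cell of each sample
        levels = np.array([samples[idx == k].mean()     # centroid rule
                           for k in range(n_levels)])
    return levels, edges

levels, edges = lloyd_max(samples)
mse = np.mean((samples - levels[np.searchsorted(edges, samples)]) ** 2)
```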
26,387
What is the best way to discretize a 1D continuous random variable?
Here is one simple idea that may work. If $X$ has distribution $F$, draw a "large" i.i.d. sample $(x_1,\dots,x_n)$ from $F$. Construct the empirical distribution function of this sample as $$ \hat{F_n}(t) = \frac{1}{n} \sum_{i=1}^n I_{[x_i,\infty)}(t) \, , $$ and treat $\hat{F_n}$ as the distribution function of $Y$, the "discretization" of $X$. This way, $Y$ assumes the values $x_1,\dots,x_n$ with equal probability $1/n$. How large must $n$ be, will depend on the details of your application. I don't claim that this is "best" in any way.
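A quick sketch of this construction (Python/NumPy; the exponential source and the sample size are arbitrary illustrations): $Y$ takes each sampled value with probability $1/n$, so its distribution function is exactly the empirical CDF of the sample, and its moments approximate those of $X$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.exponential(scale=2.0, size=n)   # i.i.d. draws from F, with E[X] = 2

# The "discretization" Y puts mass 1/n on each observed value,
# so its CDF is the empirical distribution function of the sample.
support = np.sort(x)
probs = np.full(n, 1.0 / n)

mean_Y = np.sum(support * probs)         # should be close to E[X] = 2

def F_hat(t):
    """Empirical CDF: fraction of sampled values <= t."""
    return np.searchsorted(support, t, side="right") / n
```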
26,388
How do I select the number of components for independent components analysis?
The Variational Ensemble Learning approach to ICA can do this "automatic relevance determination". It automatically turns off components that are not required to improve the bound on the Bayesian evidence. Have a look at the thesis of James Miskin, available here, which introduces the technique. This is implemented very elegantly in Java by John Winn (another PhD thesis that implements Bayesian Ensemble Learning via a message-passing algorithm). To learn the technique, I decided to implement Winn's algorithm in C++, which can be obtained from here (active development).
26,389
How do I select the number of components for independent components analysis?
As Tom says, Automatic Relevance Determination is a good approach to select a small subset of components in a probabilistic model. Another approach for ICA is to use an Indian Buffet Process prior - Knowles and Ghahramani do this in "Infinite Sparse Factor Analysis and Infinite Independent Components Analysis."
26,390
How do I interpret the results of a Breusch–Pagan test?
Are you asking about these results in particular or the Breusch-Pagan test more generally? For these particular tests, see @mpiktas's answer. Broadly, the BP test asks whether the squared residuals from a regression can be predicted using some set of predictors. These predictors may be the same as those from the original regression. The White test version of the BP test includes all the predictors from the original regression, plus their squares and interactions in a regression against the squared residuals. If the squared residuals are predictable using some set of covariates, then the estimated squared residuals and thus the variances of the residuals (which follows because the mean of the residuals is 0) appear to vary across units, which is the definition of heteroskedasticity or non-constant variance, the phenomenon that the BP test considers.
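The mechanics described above can be sketched in a few lines (Python/NumPy; the heteroskedastic data-generating process is made up for illustration). The LM statistic is $n$ times the $R^2$ from regressing the squared OLS residuals on the predictors, to be compared against a chi-square with as many degrees of freedom as there are predictors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 3, size=n)
# heteroskedastic errors: the error standard deviation grows with x
y = 1.0 + 2.0 * x + rng.normal(size=n) * (0.5 + x)

def breusch_pagan_lm(y, X):
    """LM statistic: n * R^2 from regressing squared OLS residuals on X."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    u2 = (y - Z @ beta) ** 2                 # squared OLS residuals
    gamma, *_ = np.linalg.lstsq(Z, u2, rcond=None)
    fitted = Z @ gamma
    r2 = 1.0 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    return len(y) * r2

lm_stat = breusch_pagan_lm(y, x)   # large value -> evidence of heteroskedasticity
```

Under the null of constant variance, `lm_stat` would be roughly chi-square distributed with 1 degree of freedom here, so a value far above ~4 signals heteroskedasticity.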
26,391
How do I interpret the results of a Breusch–Pagan test?
The first application of ncvTest reports that there is no heteroscedasticity, as it should. The second is not meaningful, since your dependent variable is a random walk. The Breusch-Pagan test is asymptotic, so I suspect that it cannot be readily applied to a random walk. I do not think that there are tests for heteroscedasticity for random walks, due to the fact that non-stationarity poses many more problems than heteroscedasticity; hence testing for the latter in the presence of the former is not practical.
26,392
Which kernel method gives the best probability outputs?
Gaussian process classification (using Expectation Propagation) is probably the state-of-the-art in machine learning. There is an excellent book by Rasmussen and Williams (downloadable for free), the website for which has a very good MATLAB implementation. More software, books, papers etc. here. However, in practice, KLR will probably work just as well for most problems, the major difficulty is in selecting the kernel and regularisation parameters, which is probably best done by cross-validation, although leave-one-out cross-validation can be approximated very efficiently, see Cawley and Talbot (2008).
26,393
Which kernel method gives the best probability outputs?
I guess you know that kernel logistic regression is a non-parametric method, so first of all you have that restriction. Regarding R packages, the one I know that works pretty well is np: Nonparametric kernel smoothing methods for mixed data types This package provides a variety of nonparametric (and semiparametric) kernel methods that seamlessly handle a mix of continuous, unordered, and ordered factor data types. Regarding state-of-the-art kernels, I can recommend experimenting with the ones described in this paper from 2009. Read it carefully to choose the one that is best and most current for you.
26,394
Interpreting the step output in R
The last step table is indeed the end result of the "stepwise regression". The caveat here is that you usually don't want to use this approach when there is a principled way to approach your model specification. The call is the lm call which would produce the equation used in the final step. Coefficients are the actual parameter estimates. It is notable that because you did not define a scope or direction parameter, step defaulted to a 'backward' approach, in which variable terms are evaluated for dropping at each step: if dropping the selected variable decreases the AIC, it is removed from the model, and the entire process repeats until no single variable can be dropped. In your example, at the final step Fertility ~ Agriculture + Education + Catholic + Infant.Mortality produced an AIC of 189.86, and dropping any one of those variables did not result in a lower AIC (which would have been indicative of a better model fit).
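For illustration, the backward-by-AIC procedure can be sketched from scratch (Python/NumPy rather than R, on made-up data; the AIC here is computed up to an additive constant, which is all that matters for comparisons):

```python
import numpy as np

def ols_aic(X, y):
    """AIC of an OLS fit, up to an additive constant:
    n*log(RSS/n) + 2*k, with k counting the fitted coefficients."""
    Z = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * Z.shape[1]

def backward_step(X, y, names):
    """Drop one variable at a time while doing so lowers the AIC."""
    keep = list(range(X.shape[1]))
    current = ols_aic(X[:, keep], y)
    while len(keep) > 1:
        candidates = [(ols_aic(X[:, [k for k in keep if k != j]], y), j)
                      for j in keep]
        best, drop = min(candidates)
        if best >= current:          # no drop improves the AIC: stop
            break
        keep.remove(drop)
        current = best
    return [names[k] for k in keep], current

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=n)  # column 3 is pure noise
selected, final_aic = backward_step(X, y, ["x1", "x2", "x3"])
```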
26,395
Interpreting the step output in R
The part of the printout at the end is the model you are left with. You can also get it if you capture the value of the step function: final.mod <- step(lm1) final.mod
26,396
Difference between GLS and SUR
In a narrow sense, GLS (and in particular Feasible GLS or FGLS) is an estimation method applied to SUR models. SUR implies a system of m equations that are assumed to have correlated errors, and (F)GLS helps to recover from this -- see Wikipedia on Seemingly Unrelated Regressions. GLS, on the other hand, is a method of incorporating information from the covariance structure of your model. See Wikipedia on GLS. To recap, you can use the latter (GLS) to estimate the former (SUR).
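The GLS estimator itself is a one-liner once the error covariance $\Omega$ is known: $\hat\beta = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y$. In a SUR system, $\Omega$ is built from the estimated cross-equation error covariance, but the sketch below (Python/NumPy, illustrative data) just shows the estimator and the sanity check that with $\Omega = I$ it collapses to OLS:

```python
import numpy as np

def gls(X, y, omega):
    """beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} y"""
    oinv = np.linalg.inv(omega)
    return np.linalg.solve(X.T @ oinv @ X, X.T @ oinv @ y)

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta_gls_identity = gls(X, y, np.eye(n))            # Omega = I: reduces to OLS
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In feasible GLS, one would first estimate $\Omega$ from first-stage residuals and plug it in place of the identity matrix here.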
26,397
Learning the Structure of a Hierarchical Reinforcement Task
According to this paper In the current state-of-the-art, the designer of an RL system typically uses prior knowledge about the task to add a specific set of options to the set of primitive actions available to the agent. Also see section 6.2 Learning Task Hierarchies in the same paper. The first idea that comes to my mind is that if you don't know the task hierarchy, you should start with non-hierarchical reinforcement learning and try to discover the structure afterwards or while learning, i.e. you are trying to generalize your model. To me this task looks similar to the Bayesian model merging technique for HMMs (for example see this thesis)
26,398
Why are the tied weights in autoencoders transposed and not inverted?
I'll give my own somewhat handwavy explanation of why this might work. I'm not an expert; this is just my reasoning. I don't have a source. Though, as with much of deep learning, there may not actually be any particularly strong theoretical reasons here; it just works well in practice. Consider an autoencoder with a single hidden layer of lower dimensionality than the input, and no activation functions. Let $W$ be the weight matrix from the input layer to the hidden layer and $V$ be the weight matrix from the hidden layer to the output layer. We want a $V$ that inverts $W$ (so that any input $x$ gets mapped to its hidden representation $Wx$ and then back to $VWx = x$). This is not generally possible, because $W$ maps from a higher to a lower dimensional space and is thus many-to-one. The best we can do for $V$ is the right inverse of $W$, which will map any $x$ in the row space of $W$ back to exactly $x$, and any $x$ not in the row space of $W$ back to the component of $x$ in the row space of $W$ -- the remaining component of $x$ is mapped to $0$ by $W$ and cannot be recovered. (For $W$ to have a right inverse it must have independent rows, but we can safely assume this.) The right inverse of $W$ is $W^T(WW^T)^{-1} = V$. When the rows of $W$ are orthonormal, $WW^T = I$ and $V = W^T$. So in this case, the transpose of the weights is exactly what we want. What about when the rows of $W$ are not orthonormal? Well, $W$ isn't really a fixed matrix. It is optimised during training. And if $W$ needs to be orthonormal in order for $W^T$ to invert it as well as possible, then it will become so. Note that requiring $W$ to have orthonormal rows doesn't meaningfully affect the model. Whatever $W$ is, we can form a weight matrix $W'$ with orthonormal rows via row operations on $W$ (this is the Gram-Schmidt process), which we can write as $W' = LW$ ($L$ being the row operation matrix). 
This just causes the hidden representation to be transformed in a one-to-one manner (from $Wx$ to $L(Wx)$); it contains the same information about the inputs $x$. To summarise, the optimal choice for $V$ is the right inverse of $W$. This equals $W^T$ only when $W$ has orthonormal rows, but this can become true during the optimisation process. I think the other answer is correct in that this is done because (right) matrix inverses are expensive to compute. Using the transpose has the same outcome but is much cheaper.
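The claims above are easy to check numerically. The sketch below (Python/NumPy) verifies that $W^T(WW^T)^{-1}$ is a right inverse of an arbitrary full-row-rank $W$, and that for a $W$ with orthonormal rows (built here via QR) the transpose alone already suffices:

```python
import numpy as np

rng = np.random.default_rng(0)

# W maps a 5-dim input to a 3-dim hidden representation.
W_raw = rng.normal(size=(3, 5))
right_inv = W_raw.T @ np.linalg.inv(W_raw @ W_raw.T)   # W^T (W W^T)^{-1}

# Orthonormal-row version of W: QR on W^T gives orthonormal columns in Q.
Q, _ = np.linalg.qr(W_raw.T)
W_ortho = Q.T                     # rows of W_ortho are orthonormal

check_general = W_raw @ right_inv       # = I (3x3): right inverse works always
check_transpose = W_ortho @ W_ortho.T   # = I (3x3): here V = W^T suffices
```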
26,399
Why are the tied weights in autoencoders transposed and not inverted?
You are right, it would make a lot more sense to me too if they used the inverse. There are two reasons why I think they do this: I think the calculation of the inverse is more costly than doing a transposed matrix multiplication. As I remember, the differences might not be in pure algorithmic time complexity; rather, GPUs are simply better at doing matrix multiplications. (https://scicomp.stackexchange.com/questions/5372/for-which-statistical-methods-are-gpus-faster-than-cpus) Transposition is motivated by arguing that AEs can be looked at as a dimensionality reduction method like PCA when the layers are linear (no nonlinear activation functions). In that case, the transpose would be the inverse. By training with weight tying, the desire is probably to learn a set of weights for which this property holds.
Why are the tied weights in autoencoders transposed and not inverted?
You are right, it would make a lot more sense to me too if they would use the inverse. There are two reasons why I think they do this: I think the calculation of the inverse is more costly than doing
Why are the tied weights in autoencoders transposed and not inverted? You are right; it would make a lot more sense to me too if they used the inverse. There are two reasons why I think they do this: Calculating the inverse is more costly than doing a transposed matrix multiplication. As I remember, the difference might not lie in pure algorithmic time complexity; rather, GPUs are simply better at matrix multiplications. (https://scicomp.stackexchange.com/questions/5372/for-which-statistical-methods-are-gpus-faster-than-cpus) Transposition is motivated by arguing that AEs can be looked at as a dimensionality reduction method like PCA when the layers are linear (no activation functions). In that case, the transpose would be the inverse. By training with weight tying, the hope is probably to learn a set of weights for which this property holds.
Why are the tied weights in autoencoders transposed and not inverted? You are right, it would make a lot more sense to me too if they would use the inverse. There are two reasons why I think they do this: I think the calculation of the inverse is more costly than doing
26,400
Best practices in the selection of distance metric and clustering methods for gene expression data
This will probably not be the answer you want or expect, but this is how I see these things. Clustering problem Clustering, to a degree, is almost always a subjective procedure. You decide how you want to group different elements together, choose a distance metric that satisfies your wishes, and then follow the procedure. Here is a short example - imagine we want to cluster these animals into groups: We can try different distances (based on how many legs they have, whether they can swim, how tall they are, their color), and each metric would give different clusters. Can we say that some of them are correct and others incorrect? No. Does the question "which result should I believe?" make sense? Also no. RNA expression data The same thing happens with your example. Imagine you want to group distinct genes into clusters. Immediately questions arise: 1) Questions about the distance measure: should genes that show the same pattern, but have different levels of overall expression, go into the same group (correlation-based distance) or different groups (difference-based distance)? Is the pattern more important than the overall expression level? If two genes anti-correlate, does that mean they are related and should be in the same group, or in different groups (does sign matter)? Should larger deviations be "punished" more (Euclidean distance), or are all magnitudes of difference equally important (Manhattan distance)? 2) Questions about the linkage function: do I want all the elements within one group to be at most "X" distance apart (complete linkage)? Or do I want to group genes under the same cluster if there is a chain of small changes leading from one profile to another (single linkage)? etc. These are the questions a practitioner has to answer in order to get a sensible result that can later be interpreted. All of the above options can have biological meaning behind them.
In one case you would get a cluster of genes that show similar levels of expression; in another, a cluster of genes that show similar trends. There is no one way of doing it and no reason to think that you should believe one result and doubt the others. It may sound cliché, but in a sense one has to know what one wants to do before starting. I think the correct way to look at this is that one should prefer one method in one situation and another method in a different situation. Some possibilities Now let's imagine we care about the following things: we want to group genes if they are linearly related (increase or decrease among the same individuals); we do not care about the magnitude differences between two genes (since they can be expressed at different levels, but still be related). One possibility that satisfies the above is to use the absolute correlation level as distance: $1 - |cor(gene_{1}, gene_{2})|$. Then, after we create the dendrogram, we want to create groups so that all the elements within a group are correlated with one another by at least |0.7|. For this we would pick "complete" linkage and cut the tree at a height of 0.3 (remember the distance is one minus the correlation value). Questions and advice Now with the above context, here are the answers to the questions: What are the most appropriate distance metric and hierarchical clustering methods for clustering samples (observations) and why? The most appropriate distance will depend on the situation. If you want to group samples/genes by their overall expression, you need one distance; if you want to group them by patterns, another. I performed hclust with different methods below on mock data (mtx) and the results were highly variable. I'm not sure which one to believe in. All of them are mostly equally believable. Since they all tried to achieve slightly different things, the results obtained were also different.
I'm trying to understand the most appropriate approach for clustering gene expression data (applicable to both RNAseq and microarray) to see real patterns while avoiding patterns that might occur due to random chance. Avoiding patterns that arise by chance, or worse, for technical reasons (e.g. samples were processed in batches), is not easy. For noise, I would advise not scaling your features (genes). Scaling would bring real signal and noise to the same level, which might influence the result. For the technical part, I would make sure that the groups obtained by the clustering procedure do not follow the pattern of some technical parameter (e.g. samples done in batch 1 are in one cluster and samples done in batch 2 in another). If this is the case, such batch effects will potentially have a huge influence on both sample clusters and gene clusters. Another thing you might try (when clustering genes, for example) is to look for biological meaning behind the clusters. If you find that genes within one cluster share some common ontology terms, that might provide additional confidence that the clusters you found are meaningful and not just noise. Finally, it seems you want to try using only the genes that showed differences between some groups for your clustering. This is quite a pointless exercise (in my opinion), because it is quite clear what the result will look like: the two groups you were comparing are bound to be separated, even if the procedure was performed on randomly generated numbers.
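As a concrete sketch of the |correlation|-based workflow described above (the answer refers to R's hclust; what follows is a Python/SciPy analogue of my own, on made-up data):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Mock expression matrix: 6 genes x 10 samples. Genes 0-2 follow one
# trend at different levels and scales, genes 3-5 follow an unrelated one.
trend_a = np.arange(10, dtype=float)          # monotone trend
trend_b = np.tile([1.0, -1.0], 5)             # oscillating trend
genes = np.vstack([
    trend_a, 2 * trend_a + 5, -0.5 * trend_a - 3,   # pairwise |corr| = 1
    trend_b, 1.5 * trend_b + 2, 3 * trend_b - 1,
])

# Distance = 1 - |correlation|: ignores overall level and sign,
# keeping only the linear relationship between genes.
dist = 1 - np.abs(np.corrcoef(genes))
np.fill_diagonal(dist, 0)                     # clean up float noise

# Complete linkage, tree cut at height 0.3: every pair of genes within
# a cluster is then correlated by at least |0.7|.
Z = linkage(squareform(dist, checks=False), method="complete")
labels = fcluster(Z, t=0.3, criterion="distance")
print(labels)   # genes 0-2 share one label, genes 3-5 the other
```

Swapping in a Euclidean or Manhattan distance here (or a different linkage) would implement the other choices discussed above, and would of course yield different clusters from the same data.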
Best practices in the selection of distance metric and clustering methods for gene expression data
This will probably not be the answer you want or expect, but this is how I see these things. Clustering problem Clustering, to a degree, is almost always a subjective procedure. You decide how you wan
Best practices in the selection of distance metric and clustering methods for gene expression data This will probably not be the answer you want or expect, but this is how I see these things. Clustering problem Clustering, to a degree, is almost always a subjective procedure. You decide how you want to group different elements together, choose a distance metric that satisfies your wishes, and then follow the procedure. Here is a short example - imagine we want to cluster these animals into groups: We can try different distances (based on how many legs they have, whether they can swim, how tall they are, their color), and each metric would give different clusters. Can we say that some of them are correct and others incorrect? No. Does the question "which result should I believe?" make sense? Also no. RNA expression data The same thing happens with your example. Imagine you want to group distinct genes into clusters. Immediately questions arise: 1) Questions about the distance measure: should genes that show the same pattern, but have different levels of overall expression, go into the same group (correlation-based distance) or different groups (difference-based distance)? Is the pattern more important than the overall expression level? If two genes anti-correlate, does that mean they are related and should be in the same group, or in different groups (does sign matter)? Should larger deviations be "punished" more (Euclidean distance), or are all magnitudes of difference equally important (Manhattan distance)? 2) Questions about the linkage function: do I want all the elements within one group to be at most "X" distance apart (complete linkage)? Or do I want to group genes under the same cluster if there is a chain of small changes leading from one profile to another (single linkage)? etc. These are the questions a practitioner has to answer in order to get a sensible result that can later be interpreted.
All of the above options can have biological meaning behind them. In one case you would get a cluster of genes that show similar levels of expression; in another, a cluster of genes that show similar trends. There is no one way of doing it and no reason to think that you should believe one result and doubt the others. It may sound cliché, but in a sense one has to know what one wants to do before starting. I think the correct way to look at this is that one should prefer one method in one situation and another method in a different situation. Some possibilities Now let's imagine we care about the following things: we want to group genes if they are linearly related (increase or decrease among the same individuals); we do not care about the magnitude differences between two genes (since they can be expressed at different levels, but still be related). One possibility that satisfies the above is to use the absolute correlation level as distance: $1 - |cor(gene_{1}, gene_{2})|$. Then, after we create the dendrogram, we want to create groups so that all the elements within a group are correlated with one another by at least |0.7|. For this we would pick "complete" linkage and cut the tree at a height of 0.3 (remember the distance is one minus the correlation value). Questions and advice Now with the above context, here are the answers to the questions: What are the most appropriate distance metric and hierarchical clustering methods for clustering samples (observations) and why? The most appropriate distance will depend on the situation. If you want to group samples/genes by their overall expression, you need one distance; if you want to group them by patterns, another. I performed hclust with different methods below on mock data (mtx) and the results were highly variable. I'm not sure which one to believe in. All of them are mostly equally believable.
Since they all tried to achieve slightly different things, the results obtained were also different. I'm trying to understand the most appropriate approach for clustering gene expression data (applicable to both RNAseq and microarray) to see real patterns while avoiding patterns that might occur due to random chance. Avoiding patterns that arise by chance, or worse, for technical reasons (e.g. samples were processed in batches), is not easy. For noise, I would advise not scaling your features (genes). Scaling would bring real signal and noise to the same level, which might influence the result. For the technical part, I would make sure that the groups obtained by the clustering procedure do not follow the pattern of some technical parameter (e.g. samples done in batch 1 are in one cluster and samples done in batch 2 in another). If this is the case, such batch effects will potentially have a huge influence on both sample clusters and gene clusters. Another thing you might try (when clustering genes, for example) is to look for biological meaning behind the clusters. If you find that genes within one cluster share some common ontology terms, that might provide additional confidence that the clusters you found are meaningful and not just noise. Finally, it seems you want to try using only the genes that showed differences between some groups for your clustering. This is quite a pointless exercise (in my opinion), because it is quite clear what the result will look like: the two groups you were comparing are bound to be separated, even if the procedure was performed on randomly generated numbers.
Best practices in the selection of distance metric and clustering methods for gene expression data This will probably not be the answer you want or expect, but this is how I see these things. Clustering problem Clustering, to a degree, is almost always a subjective procedure. You decide how you wan