What is a feasible sequence length for an RNN to model?
It depends entirely on the nature of your data and its internal correlations; there is no rule of thumb. However, given that you have a large amount of data, a 2-layer LSTM can model a large body of time-series problems and benchmarks.
Furthermore, you don't backpropagate through time over the whole series, but usually over the last 200-300 steps. To find the optimal value you can cross-validate using grid search or Bayesian optimisation. You can also have a look at the parameters here: https://github.com/wojzaremba/lstm/blob/master/main.lua.
So the sequence length doesn't really affect your model training; it is like having more training examples, except that you keep the previous hidden state instead of resetting it.
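The last point is easy to check directly: chunking a long sequence changes only how far gradients would flow, not the forward computation, as long as the hidden state is carried over between chunks. A small numpy sketch (my own, with an arbitrary Elman-style cell and made-up shapes):

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(8, 16)) * 0.1   # input-to-hidden weights
W_h = rng.normal(size=(16, 16)) * 0.1  # hidden-to-hidden weights

def rnn_forward(xs, h):
    """Plain Elman-style recurrence: h_t = tanh(x_t W_x + h_{t-1} W_h)."""
    for x in xs:
        h = np.tanh(x @ W_x + h @ W_h)
    return h

xs = rng.normal(size=(1000, 8))          # one long sequence of 1000 steps
h_full = rnn_forward(xs, np.zeros(16))   # run it in one pass

h = np.zeros(16)
for chunk in np.split(xs, 4):            # four chunks of 250 steps
    h = rnn_forward(chunk, h)            # carry the state, don't reset it

print(np.allclose(h_full, h))  # True: identical final state either way
```

Training with truncated BPTT does the same thing, but additionally cuts the gradient graph at each chunk boundary (e.g. by detaching the carried state).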
Is there a result that proves the bootstrap is valid if and only if the statistic is smooth?
$\blacksquare$ (1) Why are quantile estimators not Frechet differentiable, yet their bootstrap estimator is still consistent?
You need Hadamard differentiability (or compact differentiability, depending on your reference source) as a sufficient condition to make the bootstrap work in that case; the median, and indeed any quantile, is Hadamard differentiable. Frechet differentiability is too strong in most applications.
Since it usually suffices to work on a Polish space, there you want a locally linear functional in order to apply a typical compactness argument and extend your consistency result to the global situation. Also see the linearization comment below.
Theorem 2.27 of [Wasserman] will give you an intuition for how the Hadamard derivative is a weaker notion. And Theorems 3.6 and 3.7 of [Shao&Tu] give sufficient conditions for weak consistency in terms of $\rho$-Hadamard differentiability of the statistical functional $T_{n}$ with observation size $n$.
$\blacksquare$ (2) What will affect the consistency of bootstrap estimators?
[Shao&Tu] pp. 85-86 illustrate situations where inconsistency of bootstrap estimators may occur:
(1) The bootstrap is sensitive to the tail behavior of the population $F$. The consistency of $H_{BOOT}$ requires moment conditions that are more stringent than those needed for the existence of the limit of $H_0$.
(2) The consistency of the bootstrap estimator requires a certain degree of smoothness from the given statistic (functional) $T_{n}$.
(3) The behavior of the bootstrap estimator sometimes depends on the method used to obtain the bootstrap data.
And in Sec. 3.5.2 of [Shao&Tu] they revisit the quantile example using a smoothing kernel $K$. Notice that moments are linear functionals. The quote in your question, "Typically local asymptotic linearity seems to be necessary for consistency of bootstrap", is requiring some level of analyticity of the functional, which might be necessary because if it fails you can construct pathological cases like the Weierstrass function (which is continuous yet nowhere differentiable).
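The smoothness point (2) can be seen in a small simulation (my own, not from [Shao&Tu]): the sample maximum is a non-smooth functional for which the naive bootstrap is inconsistent, because the bootstrap resample's maximum coincides with the observed maximum with probability $1-(1-1/n)^n \to 1-1/e \approx 0.632$, a point mass the true sampling distribution does not have.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 200, 2000
x = rng.uniform(size=n)   # i.i.d. sample from Uniform(0, 1)
t_n = x.max()             # the non-smooth functional T_n = max

# Fraction of bootstrap resamples whose maximum equals the sample maximum
boot_max = np.array([rng.choice(x, size=n, replace=True).max() for _ in range(B)])
frac_at_max = (boot_max == t_n).mean()
print(frac_at_max)  # close to 1 - 1/e ~ 0.632, not 0
```

So the bootstrap distribution of the maximum carries a spurious atom at $T_n$ no matter how large $n$ gets, which is the classic elementary example of bootstrap inconsistency for a non-smooth statistic.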
$\blacksquare$ (3) Why does local linearity seem necessary for ensuring the consistency of the bootstrap estimator?
As for the comment "Typically local asymptotic linearity seems to be necessary for consistency of bootstrap" made by Mammen, which you mentioned: a relevant comment from [Shao&Tu] p. 78 is as follows. They note that (global) linearization is only a technique that facilitates the proof of consistency and does not indicate any necessity:
Linearization is another important technique in proving the consistency of bootstrap estimators, since results for linear statistics are often available or may be established using the techniques previously introduced. Suppose that a given statistic $T_n$ can be approximated by a linear random variable $\bar{Z}_n=\frac{1}{n}\sum_{i=1}^{n}\phi(X_i)$ (where $\phi(X)$ is a linear statistic in $X$), i.e.,
(3.19)$$T_n=\theta+\bar{Z}_n+o_{P}\left(\tfrac{1}{\sqrt{n}}\right)$$
Let $T_n^{*}$ and $\bar{Z}_n^{*}$ be the bootstrap analogs of $T_n$ and $\bar{Z}_n$, respectively, based on the bootstrap sample $\{X_1^{*},\cdots,X_n^{*}\}$. If we can establish a result for $T_n^{*}$ similar to (3.19), i.e.,
(3.20)$$T_n^{*}=\theta+\bar{Z}_n^{*}+o_{P}\left(\tfrac{1}{\sqrt{n}}\right)$$
then the limit of $H_{BOOT}(x)$ (where $x$ is the value of the parameter) $=P\{\sqrt{n}(T_n-T_n^{*}) \leq x\}$ is the same as that of $P\{\sqrt{n}(\bar{Z}_n-\bar{Z}_n^{*}) \leq x\}$. We have thus reduced the problem to a problem involving a "sample mean" $\bar{Z}_n$, whose bootstrap distribution estimator can be shown to be consistent using the methods in Sections 3.1.2-3.1.4.
They then give Example 3.3, obtaining bootstrap consistency for MLE-type bootstrapping. However, if global linearity is effective in that way, it is hard to imagine how one would prove consistency without local linearity. So I guess that is what Mammen wanted to say.
$\blacksquare$ (4) Further comments
Beyond the discussion provided by [Shao&Tu] above, I think what you want is a characterizing condition for the consistency of bootstrap estimators.
Unfortunately, I do not know of a characterization of the consistency of a bootstrap estimator for a very general class of distributions in $M(X)$. Even if one exists, I suspect it would require more than smoothness of $T$. But there does exist a characterization for certain classes of statistical models, such as the CLT class in [Gine&Zinn], or the commonly used compactly supported class (directly from the above discussion) defined over a Polish space.
Moreover, the Kolmogorov-Smirnov distance is, to my taste, the wrong distance if our focus is classical asymptotics (in contrast to the "uniform" asymptotics of empirical processes), because the KS distance does not induce the weak topology, which is the natural setting for studying asymptotic behavior. The weak topology on the space $M(X)$ is induced by the bounded Lipschitz distance (or the Prohorov-Levy distance), as adopted by [Huber] and many other authors when the focus is not the empirical process. Sometimes the discussion of the limiting behavior of the empirical process also involves the BL distance, as in [Gine&Zinn].
I hate to be cynical, yet I still feel that this is not the only piece of statistical writing that is "citing from the void". By this I simply mean that I find the citation of van Zwet's talk very irresponsible, although van Zwet is a great scholar.
$\blacksquare$ References
[Wasserman] Wasserman, Larry. All of Nonparametric Statistics. Springer, 2010.
[Shao&Tu] Shao, Jun, and Dongsheng Tu. The Jackknife and Bootstrap. Springer, 1995.
[Gine&Zinn] Giné, Evarist, and Joel Zinn. "Bootstrapping general empirical measures." The Annals of Probability (1990): 851-869.
[Huber] Huber, Peter J. Robust Statistics. Wiley, 1985.
How does Krizhevsky's '12 CNN get 253,440 neurons in the first layer?
From the Stanford notes on neural networks:
Real-world example. The Krizhevsky et al. architecture that won the ImageNet challenge in 2012 accepted images of size [227x227x3]. On the first Convolutional Layer, it used neurons with receptive field size F=11, stride S=4 and no zero padding P=0. Since (227 - 11)/4 + 1 = 55, and since the Conv layer had a depth of K=96, the Conv layer output volume had size [55x55x96]. Each of the 55*55*96 neurons in this volume was connected to a region of size [11x11x3] in the input volume. Moreover, all 96 neurons in each depth column are connected to the same [11x11x3] region of the input, but of course with different weights. As a fun aside, if you read the actual paper it claims that the input images were 224x224, which is surely incorrect because (224 - 11)/4 + 1 is quite clearly not an integer. This has confused many people in the history of ConvNets and little is known about what happened. My own best guess is that Alex used zero-padding of 3 extra pixels that he does not mention in the paper.
ref: http://cs231n.github.io/convolutional-networks/
These notes accompany the Stanford CS class CS231n: Convolutional Neural Networks for Visual Recognition.
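For what it's worth, the note's arithmetic is easy to verify. A quick sketch using the note's figures (F=11, S=4, P=0, K=96, 227x227 input) and the standard conv output-size formula; note the result, 290,400, does not match the 253,440 in the question title either, consistent with the note's point that the paper's reported numbers are confusing:

```python
def conv_output_size(w, f, s, p):
    # Standard conv layer output width: (W - F + 2P) / S + 1
    return (w - f + 2 * p) // s + 1

side = conv_output_size(227, 11, 4, 0)  # (227 - 11)/4 + 1 = 55
neurons = side * side * 96              # depth K = 96
print(side, neurons)  # 55 290400
```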
How does Krizhevsky's '12 CNN get 253,440 neurons in the first layer?
This paper is really confusing. First off, the stated input image size is incorrect: 224x224 does not give an output of 55. Those neurons are simply like grouped pixels, so the output is a 2D image of neuron values. So basically the number of neurons = width x height x depth; there are no secrets to figure out here.
How to treat categorical predictors in LASSO
When dealing with categorical variables in LASSO regression, it is usual to use a grouped LASSO that keeps the dummy variables corresponding to a particular categorical variable together (i.e., you cannot exclude only some of the dummy variables from the model). A useful method is the Modified Group LASSO (MGL) described in Choi, Park and Seo (2012). In this method the penalty is proportional to the norm of the $\boldsymbol{\beta}$ vector for the set of dummy variables. You still keep a reference category in this method, so the intercept term is still included. This allows you to deal with multiple categorical variables without identifiability problems.
In answer to your specific questions:
(1) LASSO is an estimation method for the coefficients, but the coefficients themselves are defined by the initial model equation for your regression. As such, the interpretation of the coefficients is the same as in a standard linear regression; they represent rates-of-change of the expected response due to changes in the explanatory variables.
(2) The above literature recommends grouping the variables, but keeping a reference category. This implicitly assumes that you are comparing the presence of a categorical variable with a model that removes it but still has an intercept term.
(3) As stated above, the estimation method does not affect the interpretation of the coefficients, which are set by the model statement.
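A minimal numpy sketch (my own, not the MGL algorithm from Choi, Park and Seo) of the mechanism that keeps a factor's dummies together: the group-lasso penalty's proximal operator shrinks, and zeroes out, an entire coefficient block jointly.

```python
import numpy as np

def group_soft_threshold(z, lam):
    """Proximal operator of the group penalty lam * ||z||_2: shrinks the
    whole coefficient block toward zero and zeroes it out jointly."""
    norm = np.linalg.norm(z)
    if norm <= lam:
        return np.zeros_like(z)
    return (1 - lam / norm) * z

# A 3-coefficient block, e.g. dummies for a 4-level factor (reference dropped)
beta = np.array([0.5, -0.2, 0.1])
print(group_soft_threshold(beta, 0.3))  # shrunk, but the group stays in
print(group_soft_threshold(beta, 1.0))  # entire group set to zero at once
```

This is why a categorical predictor enters or leaves the model as a whole under a grouped LASSO, rather than dummy by dummy.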
Hidden Markov Model vs Markov Transition Model vs State-Space Model...?
The following is quoted from the Scholarpedia website:
State space model (SSM) refers to a class of probabilistic graphical model (Koller and Friedman, 2009) that describes the probabilistic dependence between the latent state variable and the observed measurement. The state or the measurement can be either continuous or discrete. The term “state space” originated in 1960s in the area of control engineering (Kalman, 1960). SSM provides a general framework for analyzing deterministic and stochastic dynamical systems that are measured or observed through a stochastic process. The SSM framework has been successfully applied in engineering, statistics, computer science and economics to solve a broad range of dynamical systems problems. Other terms used to describe SSMs are hidden Markov models (HMMs) (Rabiner, 1989) and latent process models. The most well studied SSM is the Kalman filter, which defines an optimal algorithm for inferring linear Gaussian systems.
Hidden Markov Model vs Markov Transition Model vs State-Space Model...?
Alan Hawkes and I have written quite a lot about aggregated Markov processes with discrete states in continuous time. Our work has been about the problem of interpreting observations of single ion-channel molecules, and includes an exact treatment of missed short events. Similar theory works in reliability theory too, and it might well be adapted to other problems.
See http://www.onemol.org.uk/?page_id=175 for references.
Common statistical tests as linear models
This is not an exhaustive list, but if you include generalized linear models, the scope of this problem becomes substantially larger.
For instance:
The Cochran-Armitage test of trend can be formulated by:
$$E[\mbox{logit} (p) | t] = \beta_0 + \beta_1 t \qquad \mathcal{H}_0: \beta_1 = 0$$
The Pearson Chi-Square test of independence for a $p \times k$ contingency table is a log-linear model for the cell frequencies given by:
$$E[\log (\mu)] = \beta_0 + \beta_{i.} + \beta_{.j} + \gamma_{ij} \quad i,j > 1 \qquad\mathcal{H}_0: \gamma_{ij} = 0, \quad i,j > 1$$
Also, the t-test for unequal variances is well approximated by using Huber-White robust standard error estimation.
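As a concrete check of the general theme (a sketch with hypothetical toy data, my own): the classical two-sample pooled-variance t-test is exactly the test of the slope in the linear model $y = \beta_0 + \beta_1\,\text{group}$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(0.5, 1.0, 30)])
g = np.repeat([0.0, 1.0], 30)  # group indicator

# Two-sample t-test (pooled variance)
t_test = stats.ttest_ind(y[g == 1], y[g == 0])

# The same test as a linear model y = b0 + b1*g, fitted by least squares
X = np.column_stack([np.ones_like(g), g])
beta, rss = np.linalg.lstsq(X, y, rcond=None)[:2]
sigma2 = rss[0] / (len(y) - 2)                       # residual variance
se_b1 = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_lm = beta[1] / se_b1
print(t_test.statistic, t_lm)  # identical t statistics
```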
Random forests for multivariate regression
Here's an example of a multi-output regression problem, applied to face completion. It includes a code sample as well, which should give you a start on your methodology: http://scikit-learn.org/stable/auto_examples/plot_multioutput_face_completion.html
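For a minimal starting point (my own toy data, not from the linked example): scikit-learn's random forest regressor accepts a multi-output target directly, so you can fit on an (n_samples, n_targets) response without any wrapper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
# Two targets stacked into one (n_samples, 2) response matrix
Y = np.column_stack([X[:, 0] + X[:, 1] ** 2, X[:, 2] - X[:, 0]])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Y)
print(model.predict(X[:2]).shape)  # (2, 2): one prediction per target
```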
Random forests for multivariate regression
There is a new R package specifically for this (not personally tested):
https://cran.r-project.org/package=MultivariateRandomForest
Is there a Bayesian approach to density estimation
Since you want a Bayesian approach, you need to assume some prior knowledge about the thing you want to estimate, in the form of a distribution.
Now, there's the issue that this is a distribution over distributions. However, this is no problem if you assume that the candidate distributions come from some parameterized class of distributions.
For example, if you assume the data are Gaussian distributed with unknown mean but known variance, then all you need is a prior over the mean.
MAP estimation of the unknown parameter (call it $\theta$) could proceed by assuming that all the observations / data points are conditionally independent given the unknown parameter. Then, the MAP estimate is
$\hat{\theta} = \arg \max_\theta ( \text{Pr}[x_1,x_2,...,x_n,\theta] )$,
where
$ \text{Pr}[x_1,x_2,...,x_n,\theta] = \text{Pr}[x_1,x_2,...,x_n | \theta] \text{Pr}[\theta] = \text{Pr}[\theta] \prod_{i=1}^n \text{Pr}[x_i | \theta]$.
It should be noted that there are particular combinations of the prior probability $\text{Pr}[\theta]$ and the candidate distributions $\text{Pr}[x | \theta]$ that give rise to easy (closed form) updates as more data points are received.
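A standard such combination is the conjugate Gaussian-Gaussian pair from the example above: Gaussian data with known variance and a Gaussian prior on the mean give a Gaussian posterior in closed form. A small sketch (my own, with made-up numbers):

```python
import numpy as np

def posterior_mean_update(mu0, tau0_sq, x, sigma_sq):
    """Posterior N(mu_n, tau_n^2) for the unknown mean, given prior
    N(mu0, tau0_sq) and observations x with known variance sigma_sq."""
    n = len(x)
    tau_n_sq = 1.0 / (1.0 / tau0_sq + n / sigma_sq)
    mu_n = tau_n_sq * (mu0 / tau0_sq + np.sum(x) / sigma_sq)
    return mu_n, tau_n_sq

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=50)  # data with true mean 2.0
mu_n, tau_n_sq = posterior_mean_update(mu0=0.0, tau0_sq=10.0, x=x, sigma_sq=1.0)
print(mu_n)  # MAP estimate = posterior mean, close to the true mean 2.0
```

Because the posterior is again Gaussian, each new data point updates it with the same two-line formula, which is exactly the "easy (closed form) updates" referred to above.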
|
Is there a Bayesian approach to density estimation
|
Since you want a bayesian approach, you need to assume some prior knowledge about the thing you want to estimate. This will be in the form of a distribution.
Now, there's the issue that this is now a
|
Is there a Bayesian approach to density estimation
Since you want a Bayesian approach, you need to assume some prior knowledge about the thing you want to estimate. This will take the form of a distribution.
Now, there's the issue that this is now a distribution over distributions. However, this is no problem if you assume that the candidate distributions come from some parameterized class of distributions.
For example, if you want to assume the data is gaussian distributed with unknown mean but known variance, then all you need is a prior over the mean.
MAP estimation of the unknown parameter (call it $\theta$) could proceed by assuming that all the observations / data points are conditionally independent given the unknown parameter. Then, the MAP estimate is
$\hat{\theta} = \arg \max_\theta ( \text{Pr}[x_1,x_2,...,x_n,\theta] )$,
where
$ \text{Pr}[x_1,x_2,...,x_n,\theta] = \text{Pr}[x_1,x_2,...,x_n | \theta] \text{Pr}[\theta] = \text{Pr}[\theta] \prod_{i=1}^n \text{Pr}[x_i | \theta]$.
It should be noted that there are particular combinations of the prior probability $\text{Pr}[\theta]$ and the candidate distributions $\text{Pr}[x | \theta]$ that give rise to easy (closed form) updates as more data points are received.
|
Is there a Bayesian approach to density estimation
Since you want a bayesian approach, you need to assume some prior knowledge about the thing you want to estimate. This will be in the form of a distribution.
Now, there's the issue that this is now a
|
10,512
|
Is there a Bayesian approach to density estimation
|
For density estimation purposes what you need is not
$\theta_{n+1}|x_{1},\ldots,x_{n}$.
The formula in the notes, $\theta_{n+1}|\theta_{1},\ldots,\theta_{n}$, refers to the predictive distribution of the Dirichlet process.
For density estimation you actually have to sample from the predictive distribution
$$
\pi(dx_{n+1}|x_{1},\ldots,x_{n})
$$
Sampling from the above distribution can be done either with conditional methods or with marginal methods. For the conditional methods, take a look at the paper by Stephen Walker [1]. For marginal methods, you should check the Radford Neal paper [2].
For the concentration parameter $\alpha$, Mike West [3] proposes a method for inference in the MCMC procedure, including a full conditional distribution for $\alpha$. If you decide not to update the concentration $\alpha$ in the MCMC procedure, keep in mind that choosing a large value for it makes the number of distinct values drawn from the Dirichlet process larger than it would be with a small value of $\alpha$.
[1] S. G. Walker (2006). Sampling the Dirichlet mixture model with slices. Communications in Statistics - Simulation and Computation.
[2] R. M. Neal (2000). Markov chain Monte Carlo methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, Vol. 9, No. 2, pp. 249-265.
[3] M. West (1992). Hyperparameter estimation in Dirichlet process mixture models. Technical report.
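To make the effect of the concentration parameter concrete, here is a small sketch of the Polya-urn (Chinese restaurant) form of the DP predictive; the function name, the seed, and the choice of a standard-normal base measure are all illustrative assumptions:

```python
import numpy as np

# Sketch of the Polya-urn / Chinese-restaurant predictive of a
# Dirichlet process: draw i+1 is a fresh draw from the base measure
# G_0 with probability alpha / (alpha + i), and otherwise copies one
# of the i previous draws uniformly at random (equivalently, picks an
# existing distinct value with probability proportional to its count).

def dp_predictive_draws(alpha, n, seed=1):
    rng = np.random.default_rng(seed)
    values = [rng.standard_normal()]          # theta_1 ~ G_0
    for i in range(1, n):
        if rng.random() < alpha / (alpha + i):
            values.append(rng.standard_normal())       # new value
        else:
            values.append(values[rng.integers(i)])     # reuse an old one
    return values

# Larger alpha -> more distinct values, as the answer warns:
few = len(set(dp_predictive_draws(0.1, 500)))
many = len(set(dp_predictive_draws(50.0, 500)))
print(few, many)
```

Running this shows the trade-off directly: with $\alpha = 0.1$ almost all 500 draws collapse onto a handful of distinct values, while $\alpha = 50$ produces on the order of a hundred.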
|
Is there a Bayesian approach to density estimation
|
For density estimation purposes what you need is not
$\theta_{n+1}|x_{1},\ldots,x_{n}$.
The formula in the notes $\theta_{n+1}|\theta_{1},\ldots,\theta_{n}$ refers to the predictive distribution of the D
|
Is there a Bayesian approach to density estimation
For density estimation purposes what you need is not
$\theta_{n+1}|x_{1},\ldots,x_{n}$.
The formula in the notes, $\theta_{n+1}|\theta_{1},\ldots,\theta_{n}$, refers to the predictive distribution of the Dirichlet process.
For density estimation you actually have to sample from the predictive distribution
$$
\pi(dx_{n+1}|x_{1},\ldots,x_{n})
$$
Sampling from the above distribution can be done either with conditional methods or with marginal methods. For the conditional methods, take a look at the paper by Stephen Walker [1]. For marginal methods, you should check the Radford Neal paper [2].
For the concentration parameter $\alpha$, Mike West [3] proposes a method for inference in the MCMC procedure, including a full conditional distribution for $\alpha$. If you decide not to update the concentration $\alpha$ in the MCMC procedure, keep in mind that choosing a large value for it makes the number of distinct values drawn from the Dirichlet process larger than it would be with a small value of $\alpha$.
[1] S. G. Walker (2006). Sampling the Dirichlet mixture model with slices. Communications in Statistics - Simulation and Computation.
[2] R. M. Neal (2000). Markov chain Monte Carlo methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, Vol. 9, No. 2, pp. 249-265.
[3] M. West (1992). Hyperparameter estimation in Dirichlet process mixture models. Technical report.
|
Is there a Bayesian approach to density estimation
For density estimation purposes what you need is not
$\theta_{n+1}|x_{1},\ldots,x_{n}$.
The formula in the notes $\theta_{n+1}|\theta_{1},\ldots,\theta_{n}$ refers to the predictive distribution of the D
|
10,513
|
Is there a Bayesian approach to density estimation
|
Is there some approach to update F based on my new readings?
There is something precisely for that. It's pretty much the main idea of Bayesian inference.
$p(\theta | y) \propto p(y|\theta)p(\theta)$
The $p(\theta)$ is your prior, what you call $F$. The $p(y|\theta)$ is what Bayesians call the "likelihood" and it is the probability of observing your data given some value of theta. You just multiply them together and get what's called a "posterior" distribution of $\theta$. This is your "updated F". Check out chapter 1 of any Intro to Bayesian Stats book.
You don't have to get rid of $p(\theta)$ (your prior), you just have to realize that it's not your best guess anymore, now that you have data to refine it.
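For instance, in the conjugate Beta-Bernoulli model this "updated F" is a one-line computation; the prior parameters and the "readings" below are made-up numbers, purely for illustration:

```python
# Sketch: prior-to-posterior updating, p(theta|y) proportional to
# p(y|theta) * p(theta), in the conjugate Beta-Bernoulli model.
# theta is a success probability with a Beta(a, b) prior; each new
# reading is a 0/1 outcome.

a, b = 2.0, 2.0                      # prior pseudo-counts (assumed)
readings = [1, 1, 0, 1, 1, 0, 1, 1]  # new data (assumed)

# Conjugacy: posterior is Beta(a + successes, b + failures)
a_post = a + sum(readings)
b_post = b + len(readings) - sum(readings)

prior_mean = a / (a + b)                   # 0.5: the initial best guess
post_mean = a_post / (a_post + b_post)     # pulled toward the data
print(prior_mean, post_mean)
```

The prior is not discarded: it survives as the pseudo-counts `a` and `b`, which matter a lot with little data and get swamped as readings accumulate.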
|
Is there a Bayesian approach to density estimation
|
Is there some approach to update F based on my new readings?
There is something precisely for that. It's pretty much the main idea of Bayesian inference.
$p(\theta | y) \propto p(y|\theta)p(\theta)$
|
Is there a Bayesian approach to density estimation
Is there some approach to update F based on my new readings?
There is something precisely for that. It's pretty much the main idea of Bayesian inference.
$p(\theta | y) \propto p(y|\theta)p(\theta)$
The $p(\theta)$ is your prior, what you call $F$. The $p(y|\theta)$ is what Bayesians call the "likelihood" and it is the probability of observing your data given some value of theta. You just multiply them together and get what's called a "posterior" distribution of $\theta$. This is your "updated F". Check out chapter 1 of any Intro to Bayesian Stats book.
You don't have to get rid of $p(\theta)$ (your prior), you just have to realize that it's not your best guess anymore, now that you have data to refine it.
|
Is there a Bayesian approach to density estimation
Is there some approach to update F based on my new readings?
There is something precisely for that. It's pretty much the main idea of Bayesian inference.
$p(\theta | y) \propto p(y|\theta)p(\theta)$
|
10,514
|
Appropriate residual degrees of freedom after dropping terms from a model
|
Do you disagree with @FrankHarrel's answer that parsimony comes with some ugly scientific trade-offs, anyways?
I love the link provided in @MikeWiezbicki's comment to Doug Bates' rationale. If someone disagrees with your analysis, they can do it their way, and this is a fun way to start a scientific discussion about your base assumptions. A p-value does not make your conclusion an "absolute truth".
If the decision of whether or not to include a parameter in your model comes down to splitting hairs over what are, for scientifically meaningful samples, relatively small discrepancies in the df -- and you are not dealing with $n<p$ problems that justify more nuanced inference, anyways -- then you have a parameter so close to meeting your cutoffs that you should be transparent and talk about it either way: just include it, or analyze the model with and without it, but definitely discuss your decision transparently in the final analysis.
|
Appropriate residual degrees of freedom after dropping terms from a model
|
Do you disagree with @FrankHarrel's answer that parsimony comes with some ugly scientific trade-offs, anyways?
I love the link provided in @MikeWiezbicki's comment to Doug Bates' rationale. If some
|
Appropriate residual degrees of freedom after dropping terms from a model
Do you disagree with @FrankHarrel's answer that parsimony comes with some ugly scientific trade-offs, anyways?
I love the link provided in @MikeWiezbicki's comment to Doug Bates' rationale. If someone disagrees with your analysis, they can do it their way, and this is a fun way to start a scientific discussion about your base assumptions. A p-value does not make your conclusion an "absolute truth".
If the decision of whether or not to include a parameter in your model comes down to splitting hairs over what are, for scientifically meaningful samples, relatively small discrepancies in the df -- and you are not dealing with $n<p$ problems that justify more nuanced inference, anyways -- then you have a parameter so close to meeting your cutoffs that you should be transparent and talk about it either way: just include it, or analyze the model with and without it, but definitely discuss your decision transparently in the final analysis.
|
Appropriate residual degrees of freedom after dropping terms from a model
Do you disagree with @FrankHarrel's answer that parsimony comes with some ugly scientific trade-offs, anyways?
I love the link provided in @MikeWiezbicki's comment to Doug Bates' rationale. If some
|
10,515
|
Backpropagation on a convolutional layer
|
Could you not simply say that the backpropagation on a convolutional layer is the sum of the backpropagation on each part, sliding window, of the image/tensor that the convolution covers?
This is important as it connects to the fact that the weights are shared over multiple pixels and thus weights should reflect general local features of the images independently from their location.
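That intuition can be checked numerically. The sketch below (shapes, seed, and the single-channel "valid" setting are arbitrary assumptions) computes the kernel gradient as the sum of per-window contributions and verifies one entry against a finite difference:

```python
import numpy as np

# Sketch: for a single-channel "valid" convolution with a shared 3x3
# kernel, dL/d(kernel) is the SUM over all sliding-window positions of
# (input patch at that position) * (upstream gradient at that position),
# precisely because the same weights are reused at every location.

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))          # input image (assumed size)
k = rng.standard_normal((3, 3))          # shared kernel weights
H = x.shape[0] - k.shape[0] + 1          # output size for "valid" mode

def conv2d(x, k):
    out = np.zeros((H, H))
    for i in range(H):
        for j in range(H):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

g = rng.standard_normal((H, H))          # upstream gradient dL/d(out)

# Backprop for the kernel: accumulate one contribution per window.
dk = np.zeros_like(k)
for i in range(H):
    for j in range(H):
        dk += g[i, j] * x[i:i + 3, j:j + 3]

# Sanity check one weight against a finite difference of L = sum(out * g).
eps = 1e-6
k_pert = k.copy()
k_pert[1, 1] += eps
numeric = (np.sum(conv2d(x, k_pert) * g) - np.sum(conv2d(x, k) * g)) / eps
print(np.isclose(dk[1, 1], numeric, atol=1e-4))
```

Because every window's contribution is accumulated into the same `dk`, the learned weights end up reflecting features that are useful across locations rather than tied to any one of them.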
|
Backpropagation on a convolutional layer
|
Could you not simply say that the backpropagation on a convolutional layer is the sum of the backpropagation on each part, sliding window, of the image/tensor that the convolution covers?
This is impo
|
Backpropagation on a convolutional layer
Could you not simply say that the backpropagation on a convolutional layer is the sum of the backpropagation on each part, sliding window, of the image/tensor that the convolution covers?
This is important as it connects to the fact that the weights are shared over multiple pixels and thus weights should reflect general local features of the images independently from their location.
|
Backpropagation on a convolutional layer
Could you not simply say that the backpropagation on a convolutional layer is the sum of the backpropagation on each part, sliding window, of the image/tensor that the convolution covers?
This is impo
|
10,516
|
Why is logistic regression called a machine learning algorithm?
|
Machine Learning is not a well-defined term.
In fact, if you Google "Machine Learning Definition" the first two things you get are quite different.
From WhatIs.com,
Machine learning is a type of artificial intelligence (AI) that
provides computers with the ability to learn without being explicitly
programmed. Machine learning focuses on the development of computer
programs that can teach themselves to grow and change when exposed to
new data.
From Wikipedia,
Machine learning explores the construction and study of algorithms
that can learn from and make predictions on data.
Logistic regression undoubtedly fits the Wikipedia definition, and you could argue whether or not it fits the WhatIs definition.
I personally define Machine Learning just as Wikipedia does and consider it a subset of statistics.
|
Why is logistic regression called a machine learning algorithm?
|
Machine Learning is not a well defined term.
In fact, if you Google "Machine Learning Definition" the first two things you get are quite different.
From WhatIs.com,
Machine learning is a type of art
|
Why is logistic regression called a machine learning algorithm?
Machine Learning is not a well-defined term.
In fact, if you Google "Machine Learning Definition" the first two things you get are quite different.
From WhatIs.com,
Machine learning is a type of artificial intelligence (AI) that
provides computers with the ability to learn without being explicitly
programmed. Machine learning focuses on the development of computer
programs that can teach themselves to grow and change when exposed to
new data.
From Wikipedia,
Machine learning explores the construction and study of algorithms
that can learn from and make predictions on data.
Logistic regression undoubtedly fits the Wikipedia definition, and you could argue whether or not it fits the WhatIs definition.
I personally define Machine Learning just as Wikipedia does and consider it a subset of statistics.
|
Why is logistic regression called a machine learning algorithm?
Machine Learning is not a well defined term.
In fact, if you Google "Machine Learning Definition" the first two things you get are quite different.
From WhatIs.com,
Machine learning is a type of art
|
10,517
|
Why is logistic regression called a machine learning algorithm?
|
Machine Learning is hot and is where the money is. People call things they're trying to sell whatever is hot at the moment and therefore "sells". That can be selling software. That can be selling themselves as current employees trying to get promoted, as prospective employees, as consultants, etc. That can be a manager trying to get budget approved from a company bigwig to hire people and buy stuff, or to convince investors to invest in his/her hot new startup which does Machine Learning as the key to making an improved sexting app. So software does Machine Learning and people are Machine Learning experts, because that's what's hot and therefore what sells ... at least for now.
I did all kinds of linear and nonlinear statistical model fitting more than 30 years ago. It wasn't called Machine Learning then. Now, most of it would be.
Just as everyone and their uncle is now a Data "Scientist". That's hot, that's supposedly sexy, so that's what people call themselves. And that's what hiring managers who have to get budget approved to hire someone list positions as. So someone who doesn't know the first thing about math, probability, statistics, optimization, or numerical/floating point computation, uses an R or Python package of dubious correctness and robustness of implementation, and which is labeled as a Machine Learning algorithm, to apply to data they don't understand, and call themselves a Data Scientist based on their experience in doing so.
This may sound flippant, but I believe it to be the essence of the situation.
Edit: The following was tweeted on September 26, 2019:
https://twitter.com/daniela_witten/status/1177294449702928384
Daniela Witten @daniela_witten "When we raise money it’s AI, when we
hire it's machine learning, and when we do the work it's logistic
regression."
(I'm not sure who came up with this but it's a gem 💎)
|
Why is logistic regression called a machine learning algorithm?
|
Machine Learning is hot and is where the money is. People call things they're trying to sell whatever is hot at the moment and therefore "sells". That can be selling software. That can be selling th
|
Why is logistic regression called a machine learning algorithm?
Machine Learning is hot and is where the money is. People call things they're trying to sell whatever is hot at the moment and therefore "sells". That can be selling software. That can be selling themselves as current employees trying to get promoted, as prospective employees, as consultants, etc. That can be a manager trying to get budget approved from a company bigwig to hire people and buy stuff, or to convince investors to invest in his/her hot new startup which does Machine Learning as the key to making an improved sexting app. So software does Machine Learning and people are Machine Learning experts, because that's what's hot and therefore what sells ... at least for now.
I did all kinds of linear and nonlinear statistical model fitting more than 30 years ago. It wasn't called Machine Learning then. Now, most of it would be.
Just as everyone and their uncle is now a Data "Scientist". That's hot, that's supposedly sexy, so that's what people call themselves. And that's what hiring managers who have to get budget approved to hire someone list positions as. So someone who doesn't know the first thing about math, probability, statistics, optimization, or numerical/floating point computation, uses an R or Python package of dubious correctness and robustness of implementation, and which is labeled as a Machine Learning algorithm, to apply to data they don't understand, and call themselves a Data Scientist based on their experience in doing so.
This may sound flippant, but I believe it to be the essence of the situation.
Edit: The following was tweeted on September 26, 2019:
https://twitter.com/daniela_witten/status/1177294449702928384
Daniela Witten @daniela_witten "When we raise money it’s AI, when we
hire it's machine learning, and when we do the work it's logistic
regression."
(I'm not sure who came up with this but it's a gem 💎)
|
Why is logistic regression called a machine learning algorithm?
Machine Learning is hot and is where the money is. People call things they're trying to sell whatever is hot at the moment and therefore "sells". That can be selling software. That can be selling th
|
10,518
|
Why is logistic regression called a machine learning algorithm?
|
As others have mentioned already, there's no clear separation between statistics, machine learning, artificial intelligence and so on, so take any definition with a grain of salt. Logistic regression is probably more often labeled as statistics rather than machine learning, while neural networks are typically labeled as machine learning (even though neural networks are often just a collection of logistic regression models).
In my opinion, machine learning studies methods that can somehow learn from data, typically by constructing a model in some shape or form. Logistic regression, like SVM, neural networks, random forests and many other techniques, does learn from data when constructing the model.
If I understood correctly, in a Machine Learning algorithm, the model has to learn from its experience
That is not really how machine learning is usually defined. Not all machine learning methods yield models which dynamically adapt to new data (this subfield is called online learning).
What is the difference between logistic regression with the normal regression in term of "learning"?
Many regression methods are also classified as machine learning (e.g. SVM).
|
Why is logistic regression called a machine learning algorithm?
|
As others have mentioned already, there's no clear separation between statistics, machine learning, artificial intelligence and so on so take any definition with a grain of salt. Logistic regression i
|
Why is logistic regression called a machine learning algorithm?
As others have mentioned already, there's no clear separation between statistics, machine learning, artificial intelligence and so on, so take any definition with a grain of salt. Logistic regression is probably more often labeled as statistics rather than machine learning, while neural networks are typically labeled as machine learning (even though neural networks are often just a collection of logistic regression models).
In my opinion, machine learning studies methods that can somehow learn from data, typically by constructing a model in some shape or form. Logistic regression, like SVM, neural networks, random forests and many other techniques, does learn from data when constructing the model.
If I understood correctly, in a Machine Learning algorithm, the model has to learn from its experience
That is not really how machine learning is usually defined. Not all machine learning methods yield models which dynamically adapt to new data (this subfield is called online learning).
What is the difference between logistic regression with the normal regression in term of "learning"?
Many regression methods are also classified as machine learning (e.g. SVM).
|
Why is logistic regression called a machine learning algorithm?
As others have mentioned already, there's no clear separation between statistics, machine learning, artificial intelligence and so on so take any definition with a grain of salt. Logistic regression i
|
10,519
|
Why is logistic regression called a machine learning algorithm?
|
Logistic regression was invented by the statistician D. R. Cox in 1958 and so predates the field of machine learning. Logistic regression is not a classification method, thank goodness. It is a direct probability model.
If you think that an algorithm has to have two phases (initial guess, then "correct" the prediction "errors") consider this: Logistic regression gets it right the first time. That is, in the space of additive (in the logit) models. Logistic regression is a direct competitor of many machine learning methods and outperforms many of them when predictors mainly act additively (or when subject matter knowledge correctly pre-specifies interactions). Some call logistic regression a type of machine learning but most would not. You could call some machine learning methods (neural networks are examples) statistical models.
|
Why is logistic regression called a machine learning algorithm?
|
Logistic regression was invented by statistician DR Cox in 1958 and so predates the field of machine learning. Logistic regression is not a classification method, thank goodness. It is a direct prob
|
Why is logistic regression called a machine learning algorithm?
Logistic regression was invented by the statistician D. R. Cox in 1958 and so predates the field of machine learning. Logistic regression is not a classification method, thank goodness. It is a direct probability model.
If you think that an algorithm has to have two phases (initial guess, then "correct" the prediction "errors") consider this: Logistic regression gets it right the first time. That is, in the space of additive (in the logit) models. Logistic regression is a direct competitor of many machine learning methods and outperforms many of them when predictors mainly act additively (or when subject matter knowledge correctly pre-specifies interactions). Some call logistic regression a type of machine learning but most would not. You could call some machine learning methods (neural networks are examples) statistical models.
|
Why is logistic regression called a machine learning algorithm?
Logistic regression was invented by statistician DR Cox in 1958 and so predates the field of machine learning. Logistic regression is not a classification method, thank goodness. It is a direct prob
|
10,520
|
Why is logistic regression called a machine learning algorithm?
|
I'll have to disagree with most of the answers here and claim that Machine Learning has a very precise scope and a clear-cut distinction from Statistics. ML is a sub-field of computer science with a long history, which only in recent years has found applications outside its domain. ML's paternal field and application domain lies within Artificial Intelligence (robotics, pattern recognition software, etc.); therefore, it's not just a "hot term" like "Big Data" or "Data Science". Statistics, on the other hand (which comes from the word "state"), was developed within the social and economic sciences as a tool for humans, not machines. ML evolved separately from statistics and, even though somewhere along the way it started relying heavily on statistical principles, it is by no means a subfield of statistics. ML and statistics are complementary, not overlapping, fields.
Long answer:
As implied by its name, ML methods were made for software/machines while statistical methods were made for humans. Both ML and statistics deal with predictions on data; however, ML methods follow a non-parametric, automated approach whereas statistical methods require a great deal of manual model-building work with an added explanatory factor. This makes perfect sense if you consider that ML algorithms were developed in AI research as a means of automated prediction-making that was meant to be integrated in robotics software (e.g. for the purposes of voice and face recognition). When a "machine" makes a prediction, it doesn't care about the reasons behind it. A machine doesn't care to know the drivers/predictors behind a model which classifies email as spam or non-spam; it only cares to have the best accuracy of prediction. This is why virtually all ML methods are black boxes: it's not because they don't have a model, it's because the model is constructed algorithmically and not meant to be visible to either humans or machines.
The concept of "training" in ML relies on computational power, whereas statistical model-building with OLS-type of methods for parameter estimation relies on the knowledge of a human expert. In a multiple regression scenario it's strictly up to the statistician to use his expert judgement in order to choose his model and verify all required statistical assumptions. A statistician's goal is not just to find patterns and use them for predictions but also to understand his data and his problem in a much greater depth than ML.
Of course in some occasions ML and statistics do overlap, as is the case with many disciplines. Logistic regression is one of these occasions; originally a statistical method, which bears so much resemblance to the simple Perceptron (one of the most fundamental ML techniques), that it is by some seen as a ML method.
|
Why is logistic regression called a machine learning algorithm?
|
I'll have to disagree with most of the answers here and claim that Machine Learning has a very precise scope and a clear cut distinction from Statistics. ML is a sub-field of computer science with a l
|
Why is logistic regression called a machine learning algorithm?
I'll have to disagree with most of the answers here and claim that Machine Learning has a very precise scope and a clear-cut distinction from Statistics. ML is a sub-field of computer science with a long history, which only in recent years has found applications outside its domain. ML's paternal field and application domain lies within Artificial Intelligence (robotics, pattern recognition software, etc.); therefore, it's not just a "hot term" like "Big Data" or "Data Science". Statistics, on the other hand (which comes from the word "state"), was developed within the social and economic sciences as a tool for humans, not machines. ML evolved separately from statistics and, even though somewhere along the way it started relying heavily on statistical principles, it is by no means a subfield of statistics. ML and statistics are complementary, not overlapping, fields.
Long answer:
As implied by its name, ML methods were made for software/machines while statistical methods were made for humans. Both ML and statistics deal with predictions on data; however, ML methods follow a non-parametric, automated approach whereas statistical methods require a great deal of manual model-building work with an added explanatory factor. This makes perfect sense if you consider that ML algorithms were developed in AI research as a means of automated prediction-making that was meant to be integrated in robotics software (e.g. for the purposes of voice and face recognition). When a "machine" makes a prediction, it doesn't care about the reasons behind it. A machine doesn't care to know the drivers/predictors behind a model which classifies email as spam or non-spam; it only cares to have the best accuracy of prediction. This is why virtually all ML methods are black boxes: it's not because they don't have a model, it's because the model is constructed algorithmically and not meant to be visible to either humans or machines.
The concept of "training" in ML relies on computational power, whereas statistical model-building with OLS-type of methods for parameter estimation relies on the knowledge of a human expert. In a multiple regression scenario it's strictly up to the statistician to use his expert judgement in order to choose his model and verify all required statistical assumptions. A statistician's goal is not just to find patterns and use them for predictions but also to understand his data and his problem in a much greater depth than ML.
Of course in some occasions ML and statistics do overlap, as is the case with many disciplines. Logistic regression is one of these occasions; originally a statistical method, which bears so much resemblance to the simple Perceptron (one of the most fundamental ML techniques), that it is by some seen as a ML method.
|
Why is logistic regression called a machine learning algorithm?
I'll have to disagree with most of the answers here and claim that Machine Learning has a very precise scope and a clear cut distinction from Statistics. ML is a sub-field of computer science with a l
|
10,521
|
Why is logistic regression called a machine learning algorithm?
|
I finally figured it out. I now know the difference between statistical model fitting and machine learning.
If you fit a model (regression), that's statistical model fitting
If you learn a model (regression), that's machine learning
So if you learn a logistic regression, that is a machine learning algorithm.
Comment: Pardon me for being an old geezer, but whenever I hear people talking about learning a model, or learning a regression, it makes me think of Jethro "I done learned me an education".
|
Why is logistic regression called a machine learning algorithm?
|
I finally figured it out. I now know the difference between statistical model fitting and machine learning.
If you fit a model (regression), that's statistical model fitting
If you learn a model (reg
|
Why is logistic regression called a machine learning algorithm?
I finally figured it out. I now know the difference between statistical model fitting and machine learning.
If you fit a model (regression), that's statistical model fitting
If you learn a model (regression), that's machine learning
So if you learn a logistic regression, that is a machine learning algorithm.
Comment: Pardon me for being an old geezer, but whenever I hear people talking about learning a model, or learning a regression, it makes me think of Jethro "I done learned me an education".
|
Why is logistic regression called a machine learning algorithm?
I finally figured it out. I now know the difference between statistical model fitting and machine learning.
If you fit a model (regression), that's statistical model fitting
If you learn a model (reg
|
10,522
|
Why is logistic regression called a machine learning algorithm?
|
Machine learning is pretty loosely defined and you're correct in thinking that regression models--and not just logistic regression ones--also "learn" from the data. I'm not really sure if this means machine learning is really statistics or statistics is really machine learning--or if any of this matters at all.
However, I don't think it's necessary for an algorithm to repeatedly learn from its mistakes. Most methods use a training set to calculate some parameters and then use these fixed parameters to make predictions on some additional test data. The training process may involve repeatedly updating the parameters (as in backpropagation), but it doesn't have to ($k$-nearest neighbours doesn't do anything at all during training!). In any case, at test time, you may not even have access to ground-truth data.
That said, some algorithms do learn from prediction errors--this is particularly common in reinforcement learning, where an agent takes some action, observes its result, and then uses the outcome to plan future actions. For example, a robotic vacuum might start with a model of the world where it cleans all locations equally often, and then learn to vacuum dirty places (where it is "rewarded" by finding dirt) more and clean places less.
Online or incremental algorithms can be repeatedly updated with new training data. This doesn't necessarily depend on the model's prediction accuracy, but I could imagine an algorithm where the weights are updated more aggressively if, for example, the new data seems very unlikely given the current model. There are online versions of logistic regression: e.g., McMahan and Streeter (2012).
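As a rough illustration (not the actual McMahan and Streeter algorithm, which uses per-coordinate adaptive learning rates), a minimal online update for logistic regression might look like this, with the data arriving one example at a time and the weights never refit from scratch; all names and data here are made up:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def online_update(w, x, y, lr=0.1):
    """One incremental step: move the weights down the log-loss gradient
    for a single example (x, y), with y in {0, 1}."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]

# A short pattern repeated, standing in for a long stream; the first
# component is a constant bias term, and the label is 1 iff x1 > x2.
stream = [([1.0, 2.0, 1.0], 1), ([1.0, 0.5, 2.0], 0),
          ([1.0, 3.0, 0.5], 1), ([1.0, 1.0, 3.0], 0)] * 200

w = [0.0, 0.0, 0.0]
for x, y in stream:
    w = online_update(w, x, y)

# After the stream, the model should be confident on fresh points.
p_pos = sigmoid(sum(wi * xi for wi, xi in zip(w, [1.0, 4.0, 1.0])))
p_neg = sigmoid(sum(wi * xi for wi, xi in zip(w, [1.0, 1.0, 4.0])))
```

The point is that each example updates the current weight vector and is then discarded, which is what makes the method "online".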
10,523
Why is logistic regression called a machine learning algorithm?
Logistic regression (and more generally, GLM) does NOT belong to Machine Learning! Rather, these methods belong to parametric modeling.
Both parametric and algorithmic (ML) models use the data, but in different ways. Algorithmic models learn from the data how predictors map to the predictand, but they do not make any assumption about the process that has generated the observations (nor any other assumption, actually). They consider that the underlying relationships between input and output variables are complex and unknown, and thus, adopt a data driven approach to understand what's going on, rather than imposing a formal equation.
On the other hand, parametric models are prescribed a priori based on some knowledge of the process studied, use the data to estimate their parameters, and make a lot of unrealistic assumptions that rarely hold in practice (such as the independence, equal variance, and Normal distribution of the errors).
Also, parametric models (like logistic regression) are global models. They cannot capture local patterns in the data (unlike ML methods that use trees as their base models, for instance RF or boosted trees). See page 5 of this paper. As a remediation strategy, local (i.e., nonparametric) GLMs can be used (see for instance the locfit R package).
Often, when little knowledge about the underlying phenomenon is available, it is better to adopt a data-driven approach and to use algorithmic modeling. For instance, if you use logistic regression in a case where the interplay between input and output variables is not linear, your model will be clearly inadequate and a lot of signal will not be captured. However, when the process is well understood, parametric models have the advantage of providing a formal equation to summarize everything, which is powerful from a theoretical standpoint.
For a more detailed discussion, read this excellent paper by Leo Breiman.
10,524
Why is logistic regression called a machine learning algorithm?
I think the other answers do a good job at identifying more or less what Machine Learning is (as they indicate, it can be a fuzzy thing). I will add that Logistic Regression (and its more general multinomial version) is very commonly used as a means of performing classification in artificial neural networks (which I think are unambiguously covered by whatever sensible machine learning definition you choose), and so if you mention Logistic Regression to a neural net person, they are likely to immediately think of it in this context. Getting tied up with a heavy hitter in machine learning is a good way to become a machine learning technique yourself, and I think to some extent that is what happened with various regression techniques, though I wouldn't discount them from being proper machine learning techniques in and of themselves.
10,525
Why is logistic regression called a machine learning algorithm?
I think any procedure which is "iterative" can be considered a case of machine learning, and regression can be considered machine learning. We could do it by hand, but it would take a long time, if it were possible at all. So now we have these programs, machines, which do the iterations for us, getting closer and closer to a solution, or to the best solution or best fit. Thus, "machine learning". Of course, things like neural networks get most of the attention in regard to machine learning, so we usually associate machine learning with these sexy procedures. Also, the difference between "supervised" and "unsupervised" machine learning is relevant here.
10,526
Why is logistic regression called a machine learning algorithm?
It is a very common mistake that most people make, and I can see it here as well (by almost everyone). Let me explain it in detail...
Logistic regression and the linear regression model are both parametric models as well as machine learning techniques. It just depends on the method you use to estimate the model parameters (the $\theta$'s).
There are two ways of finding the model parameters in linear and logistic regression.
Gradient descent: here we start by assigning random values to the parameters and compute the cost function (error). In each iteration we update our parameters to reduce the cost function. After a certain number of iterations the cost function is reduced to the desired value, and the corresponding parameter values are our final ones. This is what a machine learning technique is supposed to do.
So, if you are using the gradient descent technique, logistic regression can be called a machine learning technique.
The least squares method (for linear regression): here we have a direct formula for the parameters (some matrix algebra is required to understand its derivation), known as the normal equation:
$b = (X^T X)^{-1} X^T y$, where $b$ represents the parameters and $X$ is the design matrix.
Both methods have their own advantages and limitations.
To get more details, follow the Coursera Machine Learning course.
I hope this post might be helpful .. :-)
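To make the contrast concrete, here is an illustrative sketch using simple linear regression with one feature (so the normal equation reduces to the familiar closed form), showing that gradient descent and the direct least-squares formula recover the same line; the data are made up:

```python
# Data generated from y = 2 + 3x (noise-free, so both methods should agree).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 3.0 * x for x in xs]
n = len(xs)

# Least squares via the normal equation (one-feature closed form).
xbar, ybar = sum(xs) / n, sum(ys) / n
b_ne = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
        / sum((x - xbar) ** 2 for x in xs))
a_ne = ybar - b_ne * xbar

# Gradient descent on the mean squared error, starting from zero.
a_gd, b_gd, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    grad_a = sum(2 * (a_gd + b_gd * x - y) for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (a_gd + b_gd * x - y) * x for x, y in zip(xs, ys)) / n
    a_gd -= lr * grad_a
    b_gd -= lr * grad_b
# Both routes should recover intercept 2 and slope 3.
```

Same model, two estimation procedures: one iterative ("learning"), one a direct formula.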
10,527
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
All other things being equal (when is that ever the case?), machine learning models require similar quantities of data to statistical models. In general, statistical models tend to have more assumptions than machine learning models, and it is these additional assumptions that give you more power (assuming they are true/valid), which means that smaller samples are needed to obtain the same confidence. You can think of the difference between statistical and machine learning models as a difference between parametric and non-parametric models.
Complex models (which are more prevalent in machine learning) with many parameters do require more data (such as deep NN), but it has to do with the parameters and not the models themselves. If you built a complex statistical model with many interactions and polynomial terms you would similarly need large amounts of data to estimate all the parameters (unless you are Bayesian... then you do not even need data!).
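To put a number on how quickly "many interactions and polynomial terms" inflates the parameter count: a full degree-$p$ polynomial model in $d$ features (all powers and interactions, intercept included) has $\binom{d+p}{p}$ coefficients. A quick check (helper name made up here):

```python
from math import comb

def n_poly_terms(d, p):
    """Coefficient count of a full degree-p polynomial model in d features
    (all powers and interactions, intercept included): C(d + p, p)."""
    return comb(d + p, p)

linear = n_poly_terms(10, 1)     # a linear model in 10 features: 11 coefficients
cubic = n_poly_terms(10, 3)      # all terms up to degree 3: 286
deep_ish = n_poly_terms(100, 3)  # 100 features, degree 3: 176,851
```

So the data requirement follows the parameter count, whether the model is labelled "statistical" or "machine learning".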
10,528
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
Well, you could do inference with a small amount of data. We just have concepts like statistical power to tell us when our results would be reliable and when they would not be.
In general, lots of data is needed in machine learning to overcome the variance in estimators/models. Trees, as an example, are incredibly high variance estimators. The only real way to combat that is to add more data since the variance shrinks proportional to $1/n$.
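The $1/n$ shrinkage is easy to see empirically. A small simulation (illustrative; Uniform(0, 1) draws, with the helper name made up here) estimates the variance of the sample mean at two sample sizes:

```python
import random

random.seed(0)

def var_of_sample_mean(n, reps=2000):
    """Empirical variance of the mean of n draws from Uniform(0, 1)."""
    means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]
    m = sum(means) / reps
    return sum((x - m) ** 2 for x in means) / reps

v10 = var_of_sample_mean(10)      # theory: (1/12)/10   = 0.00833...
v1000 = var_of_sample_mean(1000)  # theory: (1/12)/1000 = 0.0000833...
ratio = v10 / v1000               # should come out near 100
```

Going from $n = 10$ to $n = 1000$ cuts the estimator's variance by roughly a factor of 100; high-variance learners like deep trees benefit from extra data for the same underlying reason.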
10,529
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
Machine learning does not require large amounts of data, it is just that the current bandwagon is for models that work on big data (mainly deep neural networks, which have been around since the 1990s, but before that it was SVMs and before that "shallow" neural nets), but research on other forms of machine learning has continued. My own personal research interests are in model selection for small data, which is far from a solved problem, just not in fashion. Another example would be Gaussian Processes, which are very good where a complex (non-linear) model is required, but the data are relatively scarce.
It is a pity that there is so much focus on deep learning and big data, as it means that a lot of new practitioners are unaware of research that was done 20 or more years ago that is still valid today, and as a result they are falling into many of the same pitfalls that we found back in the day. Sadly, ML and AI go through these cycles of hype and doldrums.
At the end of the day though, ML is just statistics, but a more computationally focussed branch of statistics.
10,530
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
Machine learning (often) needs a lot of data because it doesn't start with a well-defined model and uses (additional) data to define or improve the model. As a consequence there are often many additional parameters to be estimated: parameters or settings that are already defined a priori in non-machine-learning methods.
Statistical inference, when it requires only a little data, is often performed with some model that is already known/defined before the observations are made. The learning has already been done.
The goal of the inference is to estimate the few missing parameters in the model and verify the accuracy of the model.
Machine learning often starts with only a very minimal model, or even no model at all but just a small set of rules from which a model can be created or selected.
For instance, one learns which variables are actually suitable to make good predictions or one uses a flexible neural network to come up with a function that fits well and makes good predictions.
Machine learning does not just search for a few parameters in an already fixed model. Instead it is the model itself that is being generated in machine learning. For that you need additional data.
Sometimes it is also the other way around: a lot of data needs machine learning. That is the situation with lots of variables but without a well defined model.
10,531
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
A typical machine learning model contains thousands to millions of parameters, while statistical modelling is typically limited to a handful of parameters.
As a rule of thumb, the minimum number of samples you need is proportional to the number of parameters you want to estimate. So for statistical modelling of a handful of parameters you might only need a hundred samples, while for machine learning with millions of parameters you may need millions of samples.
10,532
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
Machine learning and statistical inference deal with different types of problems and are not comparable from this point of view.
Statistical inference is used in problems that are inherently statistical: for example, if it has rained for ten days, then (using a Bayesian approach) the next day is more likely to be rainy as well; no need for more data.
But in machine learning, the features or patterns that exist in the data must be learned. For example, in classification with machine learning, the model must first learn (given plentiful, balanced training data) to distinguish pictures of cats from pictures of dogs; then, after the learning phase, the inference-phase problem is that we show it a picture and it should tell us whether it is a cat or a dog. Now suppose we show it 10 pictures of cats and all are classified correctly. Does the probability of the 11th picture being a cat matter to that machine? No, because it should classify the picture based on its learned ability to recognise a cat, not on the probability of a cat appearing after 10 cats.
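The rain example can be made exact. Under a uniform prior on the chance of rain, the posterior predictive probability of rain after $k$ rainy days out of $n$ is Laplace's rule of succession, $(k+1)/(n+2)$; the function name below is made up:

```python
from fractions import Fraction

def rule_of_succession(k, n):
    """Posterior predictive P(success on the next trial) after k successes
    in n trials, under a uniform Beta(1, 1) prior on the success rate."""
    return Fraction(k + 1, n + 2)

p_rain = rule_of_succession(10, 10)  # ten rainy days out of ten -> 11/12
```

So after ten rainy days out of ten, the predictive probability of rain tomorrow is $11/12$, with no further data needed.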
10,533
When is quantile regression worse than OLS?
If you are interested in the mean, use OLS; if you are interested in the median, use quantile regression.
One big difference is that the mean is more affected by outliers and other extreme data. Sometimes, that is what you want. One example is if your dependent variable is the social capital in a neighborhood. The presence of a single person with a lot of social capital may be very important for the whole neighborhood.
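The intercept-only case makes this concrete: minimising squared error gives the mean, minimising absolute error gives the median, and an outlier drags only the former. With made-up neighbourhood numbers:

```python
from statistics import mean, median

incomes = [30, 32, 35, 31, 33]    # household incomes (thousands), illustrative
with_outlier = incomes + [5000]   # one very wealthy resident moves in

# An intercept-only OLS fit is the mean; an intercept-only median
# (quantile) regression fit is the median.
m1, md1 = mean(incomes), median(incomes)            # 32.2 and 32
m2, md2 = mean(with_outlier), median(with_outlier)  # about 860.2 and 32.5
```

The mean jumps by more than an order of magnitude while the median barely moves; whether that sensitivity is a bug or a feature depends on the question you are asking.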
10,534
|
When is quantile regression worse than OLS?
|
There seems to be a confusion in the premise of the question. In the second paragraph it says, "we could just use median regression as the OLS substitute". Note that regressing the conditional median on X is (a form of) quantile regression.
If the error in the underlying data generating process is normally distributed (which can be assessed by checking if the residuals are normal), then the conditional mean equals the conditional median. Moreover, any quantile you may be interested in (e.g., the 95th percentile, or the 37th percentile), can be determined for a given point in the X dimension with standard OLS methods. The main appeal of quantile regression is that it is more robust than OLS. The downside is that if all assumptions are met, it will be less efficient (that is, you will need a larger sample size to achieve the same power / your estimates will be less precise).
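The claim that, under normality, any quantile can be recovered from an OLS fit is easy to check numerically. Below is an illustrative Python sketch on simulated data (not part of the original answer; the model, variable names, and the hard-coded $z_{0.95} \approx 1.6449$ are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with normal errors: y = 2 + 3x + eps, eps ~ N(0, 1.5^2)
n = 5000
x = rng.uniform(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 1.5, n)

# OLS fit via least squares
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma = resid.std(ddof=2)  # residual SD

# Under normal errors, the conditional 95th percentile at any x0 is
# the mean prediction plus z_{0.95} * sigma (z_{0.95} ~ 1.6449).
x0 = 5.0
q95_ols = beta[0] + beta[1] * x0 + 1.6449 * sigma
```

Here `q95_ols` should land near the true conditional 95th percentile at `x0` (which is $2 + 3\cdot 5 + 1.6449\cdot 1.5 \approx 19.47$); the same recipe works for any quantile by swapping the $z$ value.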
|
When is quantile regression worse than OLS?
|
There seems to be a confusion in the premise of the question. In the second paragraph it says, "we could just use median regression as the OLS substitute". Note that regressing the conditional media
|
When is quantile regression worse than OLS?
There seems to be a confusion in the premise of the question. In the second paragraph it says, "we could just use median regression as the OLS substitute". Note that regressing the conditional median on X is (a form of) quantile regression.
If the error in the underlying data generating process is normally distributed (which can be assessed by checking if the residuals are normal), then the conditional mean equals the conditional median. Moreover, any quantile you may be interested in (e.g., the 95th percentile, or the 37th percentile), can be determined for a given point in the X dimension with standard OLS methods. The main appeal of quantile regression is that it is more robust than OLS. The downside is that if all assumptions are met, it will be less efficient (that is, you will need a larger sample size to achieve the same power / your estimates will be less precise).
|
When is quantile regression worse than OLS?
There seems to be a confusion in the premise of the question. In the second paragraph it says, "we could just use median regression as the OLS substitute". Note that regressing the conditional media
|
10,535
|
When is quantile regression worse than OLS?
|
Both OLS and quantile regression (QR) are techniques for estimating the coefficient vector $\beta$ in a linear regression model
$$
y = X\beta + \varepsilon
$$
(for the case of QR see Koenker (1978), p. 33, second paragraph).
For certain error distributions (e.g. those with heavy tails), the QR estimator $\hat\beta_{QR}$ is more efficient than the OLS estimator $\hat\beta_{OLS}$; recall that $\hat\beta_{OLS}$ is efficient only within the class of linear unbiased estimators. This is the main motivation of Koenker (1978), which suggests using QR in place of OLS in a variety of settings. I think that for any moment of the conditional distribution $P_Y(y|X)$ we should use whichever of $\hat\beta_{OLS}$ and $\hat\beta_{QR}$ is more efficient (please correct me if I am wrong).
Now to answer your question directly, QR is "worse" than OLS (and thus $\hat\beta_{OLS}$ should be preferred over $\hat\beta_{QR}$) when $\hat\beta_{OLS}$ is more efficient than $\hat\beta_{QR}$. One such example is when the error distribution is Normal.
References:
Koenker, Roger, and Gilbert Bassett Jr. "Regression quantiles." Econometrica: Journal of the Econometric Society (1978): 33-50.
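Koenker's efficiency point can be illustrated with a small simulation. In an intercept-only model, QR at the median reduces to the sample median and OLS to the sample mean, so comparing their sampling variances compares $\hat\beta_{QR}$ and $\hat\beta_{OLS}$ directly. An illustrative Python sketch (the sample sizes, seed, and choice of Laplace errors as the heavy-tailed case are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Intercept-only model: QR at the median reduces to the sample median,
# OLS to the sample mean. Compare their sampling variances under two
# error distributions.
def sampling_vars(draw, reps=4000, n=200):
    means = np.empty(reps)
    medians = np.empty(reps)
    for i in range(reps):
        s = draw(n)
        means[i] = s.mean()
        medians[i] = np.median(s)
    return means.var(), medians.var()

v_mean_norm, v_med_norm = sampling_vars(lambda n: rng.normal(0, 1, n))
v_mean_lap, v_med_lap = sampling_vars(lambda n: rng.laplace(0, 1, n))

# Normal errors: the mean (OLS) has the smaller variance.
# Laplace (heavy-tailed) errors: the median (QR) wins.
```

Asymptotically, the median's variance is $\pi/2$ times the mean's under normal errors but only half the mean's under Laplace errors, and the simulated variances reflect exactly that reversal.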
|
When is quantile regression worse than OLS?
|
Both OLS and quantile regression (QR) are techniques for estimating the coefficient vector $\beta$ in a linear regression model
$$
y = X\beta + \varepsilon
$$
(for the case of QR see Koenk
|
When is quantile regression worse than OLS?
Both OLS and quantile regression (QR) are techniques for estimating the coefficient vector $\beta$ in a linear regression model
$$
y = X\beta + \varepsilon
$$
(for the case of QR see Koenker (1978), p. 33, second paragraph).
For certain error distributions (e.g. those with heavy tails), the QR estimator $\hat\beta_{QR}$ is more efficient than the OLS estimator $\hat\beta_{OLS}$; recall that $\hat\beta_{OLS}$ is efficient only within the class of linear unbiased estimators. This is the main motivation of Koenker (1978), which suggests using QR in place of OLS in a variety of settings. I think that for any moment of the conditional distribution $P_Y(y|X)$ we should use whichever of $\hat\beta_{OLS}$ and $\hat\beta_{QR}$ is more efficient (please correct me if I am wrong).
Now to answer your question directly, QR is "worse" than OLS (and thus $\hat\beta_{OLS}$ should be preferred over $\hat\beta_{QR}$) when $\hat\beta_{OLS}$ is more efficient than $\hat\beta_{QR}$. One such example is when the error distribution is Normal.
References:
Koenker, Roger, and Gilbert Bassett Jr. "Regression quantiles." Econometrica: Journal of the Econometric Society (1978): 33-50.
|
When is quantile regression worse than OLS?
Both OLS and quantile regression (QR) are techniques for estimating the coefficient vector $\beta$ in a linear regression model
$$
y = X\beta + \varepsilon
$$
(for the case of QR see Koenk
|
10,536
|
When is quantile regression worse than OLS?
|
To say what some of the excellent responses above said, but in a slightly different way, quantile regression makes fewer assumptions. On the right hand side of the model the assumptions are the same as with OLS, but on the left hand side the only assumption is continuity of the distribution of $Y$ (few ties). One could say that OLS provides an estimate of the median if the distribution of residuals is symmetric (hence median=mean), and under symmetry and not-too-heavy tails (especially under normality), OLS is superior to quantile regression for estimating the median, because of much better precision. If there is only an intercept in the model, the quantile regression estimate is exactly the sample median, which has efficiency of $\frac{2}{\pi}$ when compared to the mean, under normality. Given a good estimate of the root mean squared error (residual SD) you can use OLS parametrically to estimate any quantile. But quantile estimates from OLS are assumption-laden, which is why we often use quantile regression.
If you want to estimate the mean, you can't get that from quantile regression.
If you want to estimate the mean and quantiles with minimal assumptions (but more assumptions than quantile regression) but have more efficiency, use semiparametric ordinal regression. This also gives you exceedance probabilities. A detailed case study is in my RMS course notes, where it is shown on one dataset that the lowest average mean absolute estimation error over several parameters (quantiles and mean) is achieved by ordinal regression. But for just estimating the mean, OLS is best, and for just estimating quantiles, quantile regression was best.
Another big advantage of ordinal regression is that it is, except for estimating the mean, completely $Y$-transformation invariant.
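The $\frac{2}{\pi}$ efficiency figure for the sample median versus the sample mean under normality is easy to verify by simulation. An illustrative Python sketch (sample sizes and seed are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Relative efficiency of the sample median vs the sample mean under
# normality: asymptotically var(mean)/var(median) = 2/pi ~ 0.637.
reps, n = 20000, 100
samples = rng.normal(0, 1, (reps, n))
var_mean = samples.mean(axis=1).var()
var_median = np.median(samples, axis=1).var()
rel_eff = var_mean / var_median
```

With these settings `rel_eff` lands close to $2/\pi \approx 0.637$, i.e. the median needs roughly 57% more observations than the mean for the same precision when the data really are normal.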
|
When is quantile regression worse than OLS?
|
To say what some of the excellent responses above said, but in a slightly different way, quantile regression makes fewer assumptions. On the right hand side of the model the assumptions are the same
|
When is quantile regression worse than OLS?
To say what some of the excellent responses above said, but in a slightly different way, quantile regression makes fewer assumptions. On the right hand side of the model the assumptions are the same as with OLS, but on the left hand side the only assumption is continuity of the distribution of $Y$ (few ties). One could say that OLS provides an estimate of the median if the distribution of residuals is symmetric (hence median=mean), and under symmetry and not-too-heavy tails (especially under normality), OLS is superior to quantile regression for estimating the median, because of much better precision. If there is only an intercept in the model, the quantile regression estimate is exactly the sample median, which has efficiency of $\frac{2}{\pi}$ when compared to the mean, under normality. Given a good estimate of the root mean squared error (residual SD) you can use OLS parametrically to estimate any quantile. But quantile estimates from OLS are assumption-laden, which is why we often use quantile regression.
If you want to estimate the mean, you can't get that from quantile regression.
If you want to estimate the mean and quantiles with minimal assumptions (but more assumptions than quantile regression) but have more efficiency, use semiparametric ordinal regression. This also gives you exceedance probabilities. A detailed case study is in my RMS course notes, where it is shown on one dataset that the lowest average mean absolute estimation error over several parameters (quantiles and mean) is achieved by ordinal regression. But for just estimating the mean, OLS is best, and for just estimating quantiles, quantile regression was best.
Another big advantage of ordinal regression is that it is, except for estimating the mean, completely $Y$-transformation invariant.
|
When is quantile regression worse than OLS?
To say what some of the excellent responses above said, but in a slightly different way, quantile regression makes fewer assumptions. On the right hand side of the model the assumptions are the same
|
10,537
|
When is quantile regression worse than OLS?
|
Peter Flom gave a great and concise answer; I just want to expand on it. The most important part of the question is how to define "worse".
In order to define worse, we need some metric, and the functions that measure how good or bad the fit is are called loss functions.
We can have different definitions of the loss function, and there is no right or wrong in each definition, but different definitions satisfy different needs. Two well-known loss functions are squared loss and absolute value loss.
$$L_{sq}(y,\hat y)=\sum_i (y_i-\hat y_i)^2$$
$$L_{abs}(y,\hat y)=\sum_i |y_i-\hat y_i|$$
If we use squared loss as a measure of success, quantile regression will be worse than OLS. On the other hand, if we use absolute value loss, quantile regression will be better.
Which is what Peter Flom's answer says:
If you are interested in the mean, use OLS, if in the median, use quantile.
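The connection between the two loss functions and the two estimators can be shown on a toy sample: a grid search over candidate constants $c$ recovers the mean as the minimizer of $L_{sq}$ and the median as the minimizer of $L_{abs}$. An illustrative Python sketch (the toy data are made up for the example):

```python
import numpy as np

# A skewed toy sample: the mean and median differ.
y = np.array([1.0, 2.0, 2.0, 3.0, 14.0])

sq_loss = lambda c: np.sum((y - c) ** 2)    # L_sq
abs_loss = lambda c: np.sum(np.abs(y - c))  # L_abs

# Brute-force search over a fine grid of candidate constants c.
grid = np.linspace(0, 15, 15001)
best_sq = grid[np.argmin([sq_loss(c) for c in grid])]
best_abs = grid[np.argmin([abs_loss(c) for c in grid])]

# best_sq recovers mean(y) = 4.4; best_abs recovers median(y) = 2.0
```

The single outlier (14) pulls the squared-loss minimizer well above most of the data, while the absolute-loss minimizer stays at the median, which is exactly the robustness trade-off discussed above.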
|
When is quantile regression worse than OLS?
|
Peter Flom had a great and concise answer, I just want to expand it. The most important part of the question is how to define "worse".
In order to define worse, we need to have some metrics, and the
|
When is quantile regression worse than OLS?
Peter Flom gave a great and concise answer; I just want to expand on it. The most important part of the question is how to define "worse".
In order to define worse, we need some metric, and the functions that measure how good or bad the fit is are called loss functions.
We can have different definitions of the loss function, and there is no right or wrong in each definition, but different definitions satisfy different needs. Two well-known loss functions are squared loss and absolute value loss.
$$L_{sq}(y,\hat y)=\sum_i (y_i-\hat y_i)^2$$
$$L_{abs}(y,\hat y)=\sum_i |y_i-\hat y_i|$$
If we use squared loss as a measure of success, quantile regression will be worse than OLS. On the other hand, if we use absolute value loss, quantile regression will be better.
Which is what Peter Flom's answer says:
If you are interested in the mean, use OLS, if in the median, use quantile.
|
When is quantile regression worse than OLS?
Peter Flom had a great and concise answer, I just want to expand it. The most important part of the question is how to define "worse".
In order to define worse, we need to have some metrics, and the
|
10,538
|
Least Squares Regression Step-By-Step Linear Algebra Computation
|
Note: I've posted an expanded version of this answer on my website.
Would you kindly consider posting a similar answer with the actual R engine exposed?
Sure! Down the rabbit hole we go.
The first layer is lm, the interface exposed to the R programmer. You can look at the source for this by just typing lm at the R console. The majority of it (like the majority of most production level code) is busy checking of inputs, setting of object attributes, and throwing of errors; but this line sticks out
lm.fit(x, y, offset = offset, singular.ok = singular.ok,
...)
lm.fit is another R function, you can call it yourself. While lm conveniently works with formulas and data frames, lm.fit wants matrices, so that's one level of abstraction removed. Checking the source for lm.fit, more busywork, and the following really interesting line
z <- .Call(C_Cdqrls, x, y, tol, FALSE)
Now we are getting somewhere. .Call is R's way of calling into C code. There is a C function, C_Cdqrls in the R source somewhere, and we need to find it. Here it is.
Looking at the C function, again, we find mostly bounds checking, error cleanup, and busy work. But this line is different
F77_CALL(dqrls)(REAL(qr), &n, &p, REAL(y), &ny, &rtol,
REAL(coefficients), REAL(residuals), REAL(effects),
&rank, INTEGER(pivot), REAL(qraux), work);
So now we are on our third language, R has called C which is calling into fortran. Here's the fortran code.
The first comment tells it all
c dqrfit is a subroutine to compute least squares solutions
c to the system
c
c (1) x * b = y
(interestingly, looks like the name of this routine was changed at some point, but someone forgot to update the comment). So we're finally at the point where we can do some linear algebra, and actually solve the system of equations. This is the sort of thing that fortran is really good at, which explains why we passed through so many layers to get here.
The comment also explains what the code is going to do
c on return
c
c x contains the output array from dqrdc2.
c namely the qr decomposition of x stored in
c compact form.
So fortran is going to solve the system by finding the $QR$ decomposition.
The first thing that happens, and by far the most important, is
call dqrdc2(x,n,n,p,tol,k,qraux,jpvt,work)
This calls the fortran function dqrdc2 on our input matrix x. What's this?
c dqrfit uses the linpack routines dqrdc and dqrsl.
So we've finally made it to linpack. Linpack is a fortran linear algebra library that has been around since the 70s. Most serious linear algebra eventually finds its way to linpack. In our case, we are using the function dqrdc2
c dqrdc2 uses householder transformations to compute the qr
c factorization of an n by p matrix x.
This is where the actual work is done. It would take a good full day for me to figure out what this code is doing, it is as low level as they come. But generically, we have a matrix $X$ and we want to factor it into a product $X = QR$ where $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix. This is a smart thing to do, because once you have $Q$ and $R$ you can solve the linear equations for regression
$$ X^t X \beta = X^t Y $$
very easily. Indeed
$$ X^t X = R^t Q^t Q R = R^t R $$
so the whole system becomes
$$ R^t R \beta = R^t Q^t y $$
but $R$ is upper triangular and has the same rank as $X^t X$, so as long as our problem is well posed, it is full rank, and we may as well just solve the reduced system
$$ R \beta = Q^t y $$
But here's the awesome thing. $R$ is upper triangular, so the last linear equation here is just constant * beta_n = constant, so solving for $\beta_n$ is trivial. You can then go up the rows, one by one, and substitute in the $\beta$s you already know, each time getting a simple one variable linear equation to solve. So, once you have $Q$ and $R$, the whole thing collapses to what is called backwards substitution, which is easy. You can read about this in more detail here, where an explicit small example is fully worked out.
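The whole pipeline (QR factorization followed by backward substitution) can be mimicked in a few lines. The following is a numpy sketch of the same idea on simulated data, not the actual LINPACK routine; the model and dimensions are made up for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Least squares via QR factorization and backward substitution,
# mirroring the route lm -> dqrls -> dqrdc2 described above.
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(0, 0.1, n)

Q, R = np.linalg.qr(X)  # X = QR, Q has orthonormal columns, R upper triangular
rhs = Q.T @ y           # reduced system: R beta = Q^T y

# Backward substitution: the last equation has one unknown, so solve
# from the bottom row up, substituting known betas as we go.
beta = np.zeros(p)
for i in range(p - 1, -1, -1):
    beta[i] = (rhs[i] - R[i, i + 1:] @ beta[i + 1:]) / R[i, i]
```

The loop is exactly the "constant * beta_n = constant, then work upward" argument from the text, and `beta` matches what `np.linalg.lstsq` (or R's `lm.fit`) would return on the same data.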
|
Least Squares Regression Step-By-Step Linear Algebra Computation
|
Note: I've posted an expanded version of this answer on my website.
Would you kindly consider posting a similar answer with the actual R engine exposed?
Sure! Down the rabbit hole we go.
The first
|
Least Squares Regression Step-By-Step Linear Algebra Computation
Note: I've posted an expanded version of this answer on my website.
Would you kindly consider posting a similar answer with the actual R engine exposed?
Sure! Down the rabbit hole we go.
The first layer is lm, the interface exposed to the R programmer. You can look at the source for this by just typing lm at the R console. The majority of it (like the majority of most production level code) is busy checking of inputs, setting of object attributes, and throwing of errors; but this line sticks out
lm.fit(x, y, offset = offset, singular.ok = singular.ok,
...)
lm.fit is another R function, you can call it yourself. While lm conveniently works with formulas and data frames, lm.fit wants matrices, so that's one level of abstraction removed. Checking the source for lm.fit, more busywork, and the following really interesting line
z <- .Call(C_Cdqrls, x, y, tol, FALSE)
Now we are getting somewhere. .Call is R's way of calling into C code. There is a C function, C_Cdqrls in the R source somewhere, and we need to find it. Here it is.
Looking at the C function, again, we find mostly bounds checking, error cleanup, and busy work. But this line is different
F77_CALL(dqrls)(REAL(qr), &n, &p, REAL(y), &ny, &rtol,
REAL(coefficients), REAL(residuals), REAL(effects),
&rank, INTEGER(pivot), REAL(qraux), work);
So now we are on our third language, R has called C which is calling into fortran. Here's the fortran code.
The first comment tells it all
c dqrfit is a subroutine to compute least squares solutions
c to the system
c
c (1) x * b = y
(interestingly, looks like the name of this routine was changed at some point, but someone forgot to update the comment). So we're finally at the point where we can do some linear algebra, and actually solve the system of equations. This is the sort of thing that fortran is really good at, which explains why we passed through so many layers to get here.
The comment also explains what the code is going to do
c on return
c
c x contains the output array from dqrdc2.
c namely the qr decomposition of x stored in
c compact form.
So fortran is going to solve the system by finding the $QR$ decomposition.
The first thing that happens, and by far the most important, is
call dqrdc2(x,n,n,p,tol,k,qraux,jpvt,work)
This calls the fortran function dqrdc2 on our input matrix x. What's this?
c dqrfit uses the linpack routines dqrdc and dqrsl.
So we've finally made it to linpack. Linpack is a fortran linear algebra library that has been around since the 70s. Most serious linear algebra eventually finds its way to linpack. In our case, we are using the function dqrdc2
c dqrdc2 uses householder transformations to compute the qr
c factorization of an n by p matrix x.
This is where the actual work is done. It would take a good full day for me to figure out what this code is doing, it is as low level as they come. But generically, we have a matrix $X$ and we want to factor it into a product $X = QR$ where $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix. This is a smart thing to do, because once you have $Q$ and $R$ you can solve the linear equations for regression
$$ X^t X \beta = X^t Y $$
very easily. Indeed
$$ X^t X = R^t Q^t Q R = R^t R $$
so the whole system becomes
$$ R^t R \beta = R^t Q^t y $$
but $R$ is upper triangular and has the same rank as $X^t X$, so as long as our problem is well posed, it is full rank, and we may as well just solve the reduced system
$$ R \beta = Q^t y $$
But here's the awesome thing. $R$ is upper triangular, so the last linear equation here is just constant * beta_n = constant, so solving for $\beta_n$ is trivial. You can then go up the rows, one by one, and substitute in the $\beta$s you already know, each time getting a simple one variable linear equation to solve. So, once you have $Q$ and $R$, the whole thing collapses to what is called backwards substitution, which is easy. You can read about this in more detail here, where an explicit small example is fully worked out.
|
Least Squares Regression Step-By-Step Linear Algebra Computation
Note: I've posted an expanded version of this answer on my website.
Would you kindly consider posting a similar answer with the actual R engine exposed?
Sure! Down the rabbit hole we go.
The first
|
10,539
|
Least Squares Regression Step-By-Step Linear Algebra Computation
|
The actual step-by-step calculations in R are beautifully described in the answer by Matthew Drury in this same thread. In this answer I want to walk through the process of proving to oneself that the results in R with a simple example can be reached following the linear algebra of projections onto the column space, and perpendicular (dot product) errors concept, illustrated in different posts, and nicely explained by Dr. Strang in Linear Algebra and Its Applications, and readily accessible here.
In order to estimate the coefficients $\small \beta$ in the regression,
$$\small mpg = intercept\,(cyl=4) + \beta_1\,*\,weight + D1\,* intercept\,(cyl=6) + D2\, * intercept\,(cyl=8)\,\,\,\,[*]$$
with $\small D1$ and $\small D2$ representing dummy variables with values [0,1],
we first would need to include in the design matrix ($\small X$) the dummy coding for the number of cylinders, as follows:
attach(mtcars)
x1 <- wt
x2 <- cyl; x2[x2==4] <- 1; x2[!x2==1] <-0
x3 <- cyl; x3[x3==6] <- 1; x3[!x3==1] <-0
x4 <- cyl; x4[x4==8] <- 1; x4[!x4==1] <-0
X <- cbind(x1, x2, x3, x4)
colnames(X) <-c('wt','4cyl', '6cyl', '8cyl')
head(X)
wt 4cyl 6cyl 8cyl
[1,] 2.620 0 1 0
[2,] 2.875 0 1 0
[3,] 2.320 1 0 0
[4,] 3.215 0 1 0
[5,] 3.440 0 0 1
[6,] 3.460 0 1 0
If the design matrix had to strictly parallel equation $\small [*]$ (above), where the first intercept corresponds to cars of four cylinders, as in lm without a `-1`, it would require a first column of just ones, but we'll derive the same results without this intercept column.
Continuing then, to calculate the coefficients ($\small\beta$) we need the projection matrix: we project the vector of dependent-variable values onto the column space spanned by the vectors constituting the design matrix. The linear algebra is $\small ProjMatrix = \small (X^{T}X)^{-1}X^{T}$, which, multiplied by the vector of the dependent variable, gives $\small [ProjMatrix] \,[y]\, =\, [RegrCoef's]$, or $\small (X^{T}X)^{-1}X^{T}\,y = \beta$:
X_tr_X_inv <- solve(t(X) %*% X)
Proj_M <- X_tr_X_inv %*% t(X)
Proj_M %*% mpg
[,1]
wt -3.205613
4cyl 33.990794
6cyl 29.735212
8cyl 27.919934
Identical to: coef(lm(mpg ~ wt + as.factor(cyl)-1)).
Finally, to calculate the predicted values, we will need the hat matrix, which is defined as, $\small Hat Matrix = \small X(X^{T}X)^{-1}X^{T}$. This is readily calculated as:
HAT <- X %*% X_tr_X_inv %*% t(X)
And the estimated ($\hat{y}$) values as $\small X(X^{T}X)^{-1}X^{T}\,y$, in this case: y_hat <- HAT %*% mpg, which gives identical values to:
cyl <- as.factor(cyl); OLS <- lm(mpg ~ wt + cyl); predict(OLS):
y_hat <- as.numeric(y_hat)
predicted <- as.numeric(predict(OLS))
all.equal(y_hat,predicted)
[1] TRUE
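The same linear algebra carries over directly to any matrix language. Below is a Python/numpy sketch of the projection and hat-matrix computations on simulated data (mtcars itself is not reproduced; the model only mimics its structure, and all numbers are made up for the illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated stand-in for mtcars: weight plus a three-level cylinder factor.
n = 50
wt = rng.uniform(1.5, 5.5, n)
cyl = np.tile([4, 6, 8], 17)[:n]            # all three groups present
X = np.column_stack([wt, cyl == 4, cyl == 6, cyl == 8]).astype(float)
y = (-3 * wt
     + np.where(cyl == 4, 34, np.where(cyl == 6, 30, 28))
     + rng.normal(0, 1, n))

# Coefficients via (X'X)^{-1} X' y, as in Proj_M %*% mpg above.
XtX_inv = np.linalg.inv(X.T @ X)
proj = XtX_inv @ X.T
beta = proj @ y

# Fitted values via the hat matrix X (X'X)^{-1} X', as in HAT %*% mpg.
hat = X @ XtX_inv @ X.T
y_hat = hat @ y
```

As in the R version, `y_hat` equals `X @ beta`, and the hat matrix is symmetric and idempotent (applying it twice changes nothing), which is the algebraic signature of a projection.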
|
Least Squares Regression Step-By-Step Linear Algebra Computation
|
The actual step-by-step calculations in R are beautifully described in the answer by Matthew Drury in this same thread. In this answer I want to walk through the process of proving to oneself that the
|
Least Squares Regression Step-By-Step Linear Algebra Computation
The actual step-by-step calculations in R are beautifully described in the answer by Matthew Drury in this same thread. In this answer I want to walk through the process of proving to oneself that the results in R with a simple example can be reached following the linear algebra of projections onto the column space, and perpendicular (dot product) errors concept, illustrated in different posts, and nicely explained by Dr. Strang in Linear Algebra and Its Applications, and readily accessible here.
In order to estimate the coefficients $\small \beta$ in the regression,
$$\small mpg = intercept\,(cyl=4) + \beta_1\,*\,weight + D1\,* intercept\,(cyl=6) + D2\, * intercept\,(cyl=8)\,\,\,\,[*]$$
with $\small D1$ and $\small D2$ representing dummy variables with values [0,1],
we first would need to include in the design matrix ($\small X$) the dummy coding for the number of cylinders, as follows:
attach(mtcars)
x1 <- wt
x2 <- cyl; x2[x2==4] <- 1; x2[!x2==1] <-0
x3 <- cyl; x3[x3==6] <- 1; x3[!x3==1] <-0
x4 <- cyl; x4[x4==8] <- 1; x4[!x4==1] <-0
X <- cbind(x1, x2, x3, x4)
colnames(X) <-c('wt','4cyl', '6cyl', '8cyl')
head(X)
wt 4cyl 6cyl 8cyl
[1,] 2.620 0 1 0
[2,] 2.875 0 1 0
[3,] 2.320 1 0 0
[4,] 3.215 0 1 0
[5,] 3.440 0 0 1
[6,] 3.460 0 1 0
If the design matrix had to strictly parallel equation $\small [*]$ (above), where the first intercept corresponds to cars of four cylinders, as in lm without a `-1`, it would require a first column of just ones, but we'll derive the same results without this intercept column.
Continuing then, to calculate the coefficients ($\small\beta$) we need the projection matrix: we project the vector of dependent-variable values onto the column space spanned by the vectors constituting the design matrix. The linear algebra is $\small ProjMatrix = \small (X^{T}X)^{-1}X^{T}$, which, multiplied by the vector of the dependent variable, gives $\small [ProjMatrix] \,[y]\, =\, [RegrCoef's]$, or $\small (X^{T}X)^{-1}X^{T}\,y = \beta$:
X_tr_X_inv <- solve(t(X) %*% X)
Proj_M <- X_tr_X_inv %*% t(X)
Proj_M %*% mpg
[,1]
wt -3.205613
4cyl 33.990794
6cyl 29.735212
8cyl 27.919934
Identical to: coef(lm(mpg ~ wt + as.factor(cyl)-1)).
Finally, to calculate the predicted values, we will need the hat matrix, which is defined as, $\small Hat Matrix = \small X(X^{T}X)^{-1}X^{T}$. This is readily calculated as:
HAT <- X %*% X_tr_X_inv %*% t(X)
And the estimated ($\hat{y}$) values as $\small X(X^{T}X)^{-1}X^{T}\,y$, in this case: y_hat <- HAT %*% mpg, which gives identical values to:
cyl <- as.factor(cyl); OLS <- lm(mpg ~ wt + cyl); predict(OLS):
y_hat <- as.numeric(y_hat)
predicted <- as.numeric(predict(OLS))
all.equal(y_hat,predicted)
[1] TRUE
|
Least Squares Regression Step-By-Step Linear Algebra Computation
The actual step-by-step calculations in R are beautifully described in the answer by Matthew Drury in this same thread. In this answer I want to walk through the process of proving to oneself that the
|
10,540
|
Econometrics textbooks?
|
Definitely Econometric Analysis, by Greene. I'm not an econometrician, but I found this book very useful and well written.
|
Econometrics textbooks?
|
Definitely Econometric Analysis, by Greene. I'm not an econometrician, but I found this book very useful and well written.
|
Econometrics textbooks?
Definitely Econometric Analysis, by Greene. I'm not an econometrician, but I found this book very useful and well written.
|
Econometrics textbooks?
Definitely Econometric Analysis, by Greene. I'm not an econometrician, but I found this book very useful and well written.
|
10,541
|
Econometrics textbooks?
|
Depends on what level you're after. At a postgraduate level, the one I've most often seen referenced and recommended, and have therefore looked at most myself, is:
Wooldridge, Jeffrey M. Econometric Analysis of Cross Section and Panel Data. MIT Press, 2001. ISBN 9780262232197
Most of what little I know about econometrics I learnt from this book. 776 pages without a single graph.
|
Econometrics textbooks?
|
Depends on what level you're after. At a postgraduate level, the one I've most often seen referenced and recommended, and have therefore looked at most myself, is:
Wooldridge, Jeffrey M. Econometric A
|
Econometrics textbooks?
Depends on what level you're after. At a postgraduate level, the one I've most often seen referenced and recommended, and have therefore looked at most myself, is:
Wooldridge, Jeffrey M. Econometric Analysis of Cross Section and Panel Data. MIT Press, 2001. ISBN 9780262232197
Most of what little I know about econometrics I learnt from this book. 776 pages without a single graph.
|
Econometrics textbooks?
Depends on what level you're after. At a postgraduate level, the one I've most often seen referenced and recommended, and have therefore looked at most myself, is:
Wooldridge, Jeffrey M. Econometric A
|
10,542
|
Econometrics textbooks?
|
"Mostly Harmless Econometrics: An Empiricist's Companion" (Angrist, Pischke 2008) is a less technical and entertaining summary of the field. I wouldn't describe it as a beginner book, but it's well worth reading once you understand the basics.
|
Econometrics textbooks?
|
"Mostly Harmless Econometrics: An Empiricist's Companion" (Angrist, Pischke 2008) is a less technical and entertaining summary of the field. I wouldn't describe it as a beginner book, but it's well w
|
Econometrics textbooks?
"Mostly Harmless Econometrics: An Empiricist's Companion" (Angrist, Pischke 2008) is a less technical and entertaining summary of the field. I wouldn't describe it as a beginner book, but it's well worth reading once you understand the basics.
|
Econometrics textbooks?
"Mostly Harmless Econometrics: An Empiricist's Companion" (Angrist, Pischke 2008) is a less technical and entertaining summary of the field. I wouldn't describe it as a beginner book, but it's well w
|
10,543
|
Econometrics textbooks?
|
It depends on what you really want (GMM, time series, panel...), but I can recommend those two books:
Fumio Hayashi's "Econometrics" and
Davidson and MacKinnon's "Econometric Theory and Methods".
For a course in econometric time series, Hamilton's "Time Series Analysis" is great.
|
Econometrics textbooks?
|
It depends on what you really want (GMM, time series, panel...), but I can recommend those two books:
Fumio Hayashi's "Econometrics" and
Davidson and MacKinnon's "Econometric Theory and Methods".
For a
|
Econometrics textbooks?
It depends on what you really want (GMM, time series, panel...), but I can recommend those two books:
Fumio Hayashi's "Econometrics" and
Davidson and MacKinnon's "Econometric Theory and Methods".
For a course in econometric time series, Hamilton's "Time Series Analysis" is great.
|
Econometrics textbooks?
It depends on what you really want (GMM, time series, panel...), but I can recommend those two books:
Fumio Hayashi's "Econometrics" and
Davidson and MacKinnon's "Econometric Theory and Methods".
For a
|
10,544
|
Econometrics textbooks?
|
I really like Kennedy's A Guide to Econometrics, which is unusual in its setup, since every topic is discussed on three different levels, first in a non-technical way, then going into details of application and finally going into theoretical details, although the theoretical parts are a bit superficial.
|
Econometrics textbooks?
|
I really like Kennedy's A Guide to Econometrics, which is unusual in its setup, since every topic is discussed on three different levels, first in a non-technical way, then going into details of appli
|
Econometrics textbooks?
I really like Kennedy's A Guide to Econometrics, which is unusual in its setup, since every topic is discussed on three different levels, first in a non-technical way, then going into details of application and finally going into theoretical details, although the theoretical parts are a bit superficial.
|
Econometrics textbooks?
I really like Kennedy's A Guide to Econometrics, which is unusual in its setup, since every topic is discussed on three different levels, first in a non-technical way, then going into details of appli
|
10,545
|
Econometrics textbooks?
|
I would definitely recommend M. Verbeek's A Guide to Modern Econometrics.
Wooldridge is too wordy (and this long-windedness loses the reader's focus too early in the chapters). Greene (I'm referring to the 5th edition) often gets lost in minutiae: i.e., it strives to catalog formulae that are orthogonal to the main subject of the chapter (good for reference, but again, not ideal for learning).
I've not read the Hayashi (though I suspect it's a bit outdated now). Hamilton is really focused on TSA, so it's a bit off the mark for general econometrics.
|
10,546
|
Econometrics textbooks?
|
"Applied Econometrics with R" (Kleiber, Zeileis 2008) is a good introduction using R, and is accompanied by the AER package.
|
10,547
|
Econometrics textbooks?
|
(Disclaimer: I'm not an economist.) I gather you might like to have a range of possibilities listed; however, most of the answers focus on more advanced texts. Should someone want a very introductory text, I can recommend:
Gujarati, D., & Porter, D. (2008). Basic Econometrics. McGraw-Hill/Irwin.
This is very basic (i.e., little math), very comprehensive (~1k pages--there's a ~30p chapter on every conceivable topic that can be read very quickly), and very clear. Moreover, huge numbers of these seem to have been printed and used in classes a few years back, so you can pick up an old copy for a few bucks in a used book store in any college town.
|
10,548
|
Econometrics textbooks?
|
I am an econometrics lecturer. Definitely, the best book depends on what you want and the level that is suitable for you. However, my first option is "Basic Econometrics" written by Gujarati. The fourth edition of that textbook provides a good and well-written overview of the subject (Gujarati, 2002). Sadly, I cannot say the same regarding the fifth one (Gujarati and Porter, 2008).
I use that textbook in undergraduate and postgraduate courses. For the undergraduate ones, I use it as the main textbook. For the postgraduate ones, I use it in addition to "Econometric Analysis" by Greene (2012). I believe it is necessary to do this in order to develop the econometric intuition behind the formulae. Econometrics involves intuition, art and technique. Currently, no book provides all three in an adequate and balanced manner.
|
10,549
|
Econometrics textbooks?
|
One at a somewhat lower level of mathematical sophistication than Wooldridge (less dense, more pictures), but a bit more up to date on some of the fast-moving areas:
Murray, Michael P. Econometrics: A Modern Introduction. Addison Wesley, 2006. 976 pp. ISBN 9780321113610
It seems that it's not available for preview on the web and the publisher is out of stock, but you can view pdfs of 11 web extensions to get an idea of its style.
|
10,550
|
Econometrics textbooks?
|
I like Cameron and Trivedi's Microeconometrics. It strikes a nice balance between breadth, intuition, and rigor (if you follow up on the references). The target audience is the applied researcher. Their Microeconometrics Using Stata is also quite good if you're a Stata user, though it covers less ground. At advanced undergrad or MA level, I love Johnston and DiNardo's Econometric Methods. I hope there's a new edition soon. Kennedy's book is great for intuition if any of the above are tough going.
|
10,551
|
Econometrics textbooks?
|
Hashem Pesaran's book looks very promising. It covers such topics as dependencies in panel data and others that I haven't seen in other books.
|
10,552
|
Econometrics textbooks?
|
I prefer the fourth edition of "Basic Econometrics", among other reasons, because the text is completely self-contained. The fifth edition requires web access in order to replicate the exercises contained in the text (users of the previous edition did not have this problem because the book came packaged with a cd). This change, which seems like a simple technological update, drastically reduces the possibility of replicating the textbook exercises for third-world students (like the ones I teach). Indeed, at my university there is a proposal to drop "Basic Econometrics" as the textbook for this reason, among others (an additional one refers to the difficulties that local lecturers have in obtaining access to the Online Learning Center).
Another problem of the fifth edition relates to the writing style. According to the authors, one "improvement" of the new edition was the simplification of the analyses included in several chapters (see preface). However, I do not agree with them: the explanations contained in previous editions are definitely better than the ones contained in the fifth. Admittedly, it can be argued that this situation may be the result of translation problems (I use the Spanish version of the book). However, users of the fifth edition in English have pointed out the same issues (see the comments on amazon.com).
Despite all these problems, I believe that "Basic Econometrics" is the best undergraduate textbook on the market. Even the fifth edition is a good one. Hopefully the next edition will be as good as the fourth.
|
10,553
|
Visualizing Likert Item Response Data
|
I like the centered count view. This particular version removes the neutral answers (effectively treating neutral and n/a as the same) to show only the amount of agree/disagree opinions. The 0 point is where red and blue meet. The count axis is clipped out.
For comparison, here are the same five responses as stacked percentages, showing both neutral (gray) and no answer (white).
Update: Paper suggesting a similar method: Plotting Likert and Other Rating Scales (PDF)
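To make the centering concrete, here is a small Python sketch of how the segment extents of such a diverging bar can be computed, with 0 at the agree/disagree boundary and neutral responses dropped. The category names and counts below are purely illustrative, not taken from the survey shown above:

```python
# Hypothetical counts for one Likert item (illustrative numbers only).
counts = {"strongly disagree": 5, "disagree": 10, "neutral": 7,
          "agree": 20, "strongly agree": 8}

def diverging_segments(counts):
    """Compute (start, end) x-extents for a diverging stacked bar.

    Disagree categories are stacked leftward from 0, agree categories
    rightward; neutral responses are simply not drawn.
    """
    left_cats = ["disagree", "strongly disagree"]   # drawn leftward from 0
    right_cats = ["agree", "strongly agree"]        # drawn rightward from 0
    segs = {}
    x = 0
    for c in left_cats:
        segs[c] = (x - counts[c], x)
        x -= counts[c]
    x = 0
    for c in right_cats:
        segs[c] = (x, x + counts[c])
        x += counts[c]
    return segs
```

The disagree segments then occupy negative x-extents and the agree segments positive ones, so the red/blue boundary sits exactly at 0.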
|
10,554
|
Visualizing Likert Item Response Data
|
Stacked barcharts are generally well understood by non-statisticians, provided they are gently introduced. It is useful to scale them on a common metric (e.g. 0-100%), with a gradual color for each category if these are ordinal item (e.g. Likert). I prefer dotchart (Cleveland dot plot), when there are not too many items and no more than 3-5 responses categories. But it is really a matter of visual clarity. I generally provide % as it is a standardized measure, and only report both % and counts with non-stacked barchart. Here is an example of what I mean:
data(Environment, package="ltm")
Environment[sample(1:nrow(Environment), 10), 1] <- NA
na.count <- apply(Environment, 2, function(x) sum(is.na(x)))
tab <- apply(Environment, 2, table) /
  apply(apply(Environment, 2, table), 2, sum) * 100
dotchart(tab, xlim=c(0,100), xlab="Frequency (%)",
         sub=paste("N", nrow(Environment), sep="="))
text(100, c(2,7,12,17,22,27), rev(na.count), cex=.8)
mtext("# NA", side=3, line=0, at=100, cex=.8)
Better rendering could be achieved with lattice or ggplot2. All items have the same response categories in this particular example, but in the more general case we might expect different ones, so that showing all of them would not seem redundant as is the case here. It would be possible, however, to give the same color to each response category so as to facilitate reading.
But I would say stacked barcharts are better when all items have the same response category, as they help to appreciate the frequency of one response modality across items:
I can also think of some kind of heatmap, which is useful if there are many items with similar response category.
Missing responses (esp. when non negligible or localized on specific item/question) should be reported, ideally for each item. Generally, % of responses for each category are computed without NA. This is what is usually done in survey or psychometrics (we speak of "expressed or observed responses").
P.S.
I can think of more fancy things like the picture shown below (the first one was made by hand, the second is from ggplot2, ggfluctuation(as.table(tab))), but I don't think they convey information as accurately as a dotplot or barchart, since surface variations are difficult to appreciate.
|
10,555
|
Visualizing Likert Item Response Data
|
I think chl's answer is great.
One thing I might add is for the case where you'd want to compare the correlations between the items. For that you can use something like a Correlation scatter-plot matrix for ordered-categorical data
(That code still needs some tweaking - but it gives the general idea...)
|
10,556
|
Natural interpretation for LDA hyperparameters
|
David Blei has a great talk introducing LDA to students of a summer class: http://videolectures.net/mlss09uk_blei_tm/
In the first video he covers extensively the basic idea of topic modelling and how Dirichlet distributions come into play. The plate notation is explained as if all hidden variables were observed, to show the dependencies. Basically, topics are distributions over words, and documents are distributions over topics.
In the second video he shows the effect of alpha with some sample graphs: the smaller the alpha, the sparser the distribution. Also, he introduces some inference approaches.
|
10,557
|
Natural interpretation for LDA hyperparameters
|
The answer depends on whether you are assuming the symmetric or asymmetric dirichlet distribution (or, more technically, whether the base measure is uniform). Unless something else is specified, most implementations of LDA assume the distribution is symmetric.
For the symmetric distribution, a high alpha-value means that each document is likely to contain a mixture of most of the topics, and not any single topic specifically. A low alpha value puts less such constraints on documents and means that it is more likely that a document may contain mixture of just a few, or even only one, of the topics. Likewise, a high beta-value means that each topic is likely to contain a mixture of most of the words, and not any word specifically, while a low value means that a topic may contain a mixture of just a few of the words.
If, on the other hand, the distribution is asymmetric, a high alpha-value means that a specific topic distribution (depending on the base measure) is more likely for each document. Similarly, a high beta-value means each topic is more likely to contain a specific word mix defined by the base measure.
In practice, a high alpha-value will lead to documents being more similar in terms of what topics they contain. A high beta-value will similarly lead to topics being more similar in terms of what words they contain.
So, yes, the alpha-parameters specify prior beliefs about topic sparsity/uniformity in the documents. I'm not entirely sure what you mean by "mutual exclusiveness of topics in terms of words" though.
More generally, these are concentration parameters for the dirichlet distribution used in the LDA model. To gain some intuitive understanding of how this works, this presentation contains some nice illustrations, as well as a good explanation of LDA in general.
An additional comment I'll put here, since I can't comment on your original question: From what I've seen, the alpha- and beta-parameters can somewhat confusingly refer to several different parameterizations. The underlying Dirichlet distribution is usually parameterized with the vector $(\alpha_1, \alpha_2, \ldots, \alpha_K)$, but this can be decomposed into the base measure $\textbf{u} = (u_1, u_2, \ldots, u_K)$ and the concentration parameter $\alpha$, such that $\alpha \cdot \textbf{u} = (\alpha_1, \alpha_2, \ldots, \alpha_K)$. In the case where the alpha parameter is a scalar, it usually means the concentration parameter $\alpha$, but it can also mean the values of $(\alpha_1, \alpha_2, \ldots, \alpha_K)$, since these are all equal under the symmetric Dirichlet distribution. If it's a vector, it usually refers to $(\alpha_1, \alpha_2, \ldots, \alpha_K)$. I'm not sure which parametrization is most common, but in my reply I assume you meant the alpha- and beta-values as the concentration parameters.
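The sparsity effect of the concentration parameter is easy to verify numerically. Here is a pure-Python sketch (not tied to any particular LDA implementation) that draws from a symmetric Dirichlet by normalizing Gamma variates, a standard construction:

```python
import random

def symmetric_dirichlet(alpha, k, rng):
    """One draw from a symmetric Dirichlet(alpha) over k categories,
    built by normalizing k independent Gamma(alpha, 1) variates."""
    g = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(g)
    return [x / total for x in g]

rng = random.Random(0)

def avg_max(alpha, draws=300, k=5):
    """Average size of the largest component over many draws: a small
    alpha concentrates mass on few topics, a large alpha spreads it out."""
    return sum(max(symmetric_dirichlet(alpha, k, rng)) for _ in range(draws)) / draws
```

With k = 5, a draw at alpha = 0.1 typically puts most of its mass on one or two components, while at alpha = 10 all components hover around 1/5, which is exactly the document-topic (and topic-word) sparsity behavior described above.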
|
10,558
|
What's wrong with (some) pseudo-randomization
|
You are right to be skeptical. In general, one should use 'real' randomization, because typically one doesn't have all knowledge about relevant factors (unobservables). If one of those unobservables is correlated with the age being odd or even, then it is also correlated with whether or not they received treatment. If this is the case, we cannot identify the treatment effect: effects we observe could be due to treatment, or due to the unobserved factor(s).
This is not a problem with real randomization, where we don't expect any dependence between treatment and unobservables (though, of course, for small samples it may be there).
To construct a story for why this randomization procedure might be a problem, suppose the study only included subjects that were aged 17/18 when, say, the Vietnam war started. At 17 there was no chance of being drafted (correct me if I am wrong on that), while there was that chance at 18. Assuming the chance was nonnegligible and that war experience changes people, it implies that, years later, these two groups are different, even though they are just 1 year apart. So perhaps the treatment (drug) looks like it doesn't work, but because only the group with Vietnam veterans received it, this may actually be due to the fact that it doesn't work on people with PTSD (or other factors related to being a veteran). In other words, you need both groups (treatment and control) to be identical, except for the treatment, to identify the treatment effect. With assignment by age, this is not the case.
So unless you can rule out unobserved differences between the groups (but how can you, if they aren't observed?), real randomization is preferable.
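The identification failure is easy to see in a toy simulation (illustrative Python; the numbers, including the size of the unobserved effect, are invented). Assign treatment by age parity while an unobserved trait also tracks parity, and the naive difference in means absorbs the confounder; under real randomization it recovers the treatment effect:

```python
import random

rng = random.Random(7)
TRUE_EFFECT = 2.0  # the effect we would like to estimate

def estimate(assign_by_parity, n=4000):
    """Naive difference-in-means estimate of the treatment effect."""
    treated, control = [], []
    for _ in range(n):
        age = rng.randrange(30, 60)
        unobserved = 3.0 if age % 2 == 0 else 0.0   # confounder tied to parity
        if assign_by_parity:
            is_treated = age % 2 == 0               # even ages get the drug
        else:
            is_treated = rng.random() < 0.5         # real randomization
        y = (TRUE_EFFECT if is_treated else 0.0) + unobserved + rng.gauss(0, 1)
        (treated if is_treated else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)
```

Under parity assignment the estimate sits near TRUE_EFFECT + 3, because treatment and confounder are inseparable; under randomization the confounder averages out of both groups and the estimate sits near TRUE_EFFECT.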
|
What's wrong with (some) pseudo-randomization
|
You are right to be skeptical. In general, one should use 'real' randomization, because typically one doesn't have all knowledge about relevant factors (unobservables). If one of those unobservables i
|
What's wrong with (some) pseudo-randomization
You are right to be skeptical. In general, one should use 'real' randomization, because typically one doesn't have all knowledge about relevant factors (unobservables). If one of those unobservables is correlated with the age being odd or even, then it is also correlated with whether or not they received treatment. If this is the case, we cannot identify the treatment effect: effects we observe could be due to treatment, or due to the unobserved factor(s).
This is not a problem with real randomization, where we don't expect any dependence between treatment and unobservables (though, of course, for small samples it may be there).
To construct a story why this randomization procedure might be a problem, suppose the study only included subjects that were at age 17/18 when, say, the Vietnam war started. With 17 there was no chance to be drafted (correct me if I am wrong on that), while there was that chance at 18. Assuming the chance was nonnegligible and that war experience changes people, it implies that, years later, these two groups are different, even though they are just 1 year apart. So perhaps the treatment (drug) looks like it doesn't work, but because only the group with Vietnam veterans received it, this may actually be due to the fact that it doesn't work on people with PTSD (or other factors related to being a veteran). In other words, you need both groups (treatment and control) to be identical, except for the treatment, to identify the treatment effect. With assignment by age, this is not the case.
So unless you can rule out unobserved differences between the groups (but how would you do that, if they aren't observed?), real randomization is preferable.
|
What's wrong with (some) pseudo-randomization
You are right to be skeptical. In general, one should use 'real' randomization, because typically one doesn't have all knowledge about relevant factors (unobservables). If one of those unobservables i
|
10,559
|
What's wrong with (some) pseudo-randomization
|
It is a good exercise to uphold contrarian views from time to time, so let me begin by offering a few reasons in favor of this form of pseudo-randomization. They are, principally, that it is little different than any other form of systematic sampling, such as obtaining samples of environmental media at points of a grid in the field or sampling every other tree in an orchard, and therefore this sampling might enjoy comparable advantages.
The analogy here is perfect: age was "gridded" by year starting at an origin of zero and assignment to the groups alternated along this (one-dimensional) grid. Some advantages of this approach are to guarantee wide, even dispersion of the sample across the field or orchard (or ages, in this case), which helps even out influences related to location (or time). This can be especially useful when theory suggests that location is the predominant factor in variation of response. Moreover, except for really tiny samples, analyzing the data as if they were a simple random sample introduces relatively little error. Furthermore, some randomization is possible: in the field we can randomly choose the origin and orientation of the grid. In the present case, we can at least randomize whether the even years are controls or treatment subjects.
Another advantage of gridded sampling is to detect localized variation. In the field, this would be "pockets" of unusual responses. Statistically, we may think of them as manifestations of spatial correlation. In the present situation, if there is some chance that a relatively narrow age range experiences unusual responses, then the gridded design is an excellent choice, because a purely randomized design can by chance contain large gaps in ages within one of the groups. (But a better design might be to stratify: use parity of age to form two analytical strata and then, independently within each stratum, randomize patients into control and treatment groups.)
Unfortunately, this defense falls apart once we come to terms with how ages are actually reported. US Census data show that (1) self-reported ages tend to be rounded to multiples of five (I have seen this in analyses of rural block group data) and (2) this tendency is associated with indicators of lower education or socioeconomic status. (It is also well known, although difficult to test, that the final digit in many self-reported ages is $9$, that people in certain fields of work, such as acting, tend to reduce their reported ages, and others will exaggerate their ages for various purposes.) Thus, at least to a slight degree in at least some areas of the US (and even more so elsewhere in the world), the parity of one's reported age is likely to be associated with factors important for the experiment. This renders the concern in the question less than hypothetical: it is real. At this point, the previous answers in this thread capably present the additional thoughts I would care to make, so I will stop and invite you to re-read them.
|
What's wrong with (some) pseudo-randomization
|
It is a good exercise to uphold contrarian views from time to time, so let me begin by offering a few reasons in favor of this form of pseudo-randomization. They are, principally, that it is little d
|
What's wrong with (some) pseudo-randomization
It is a good exercise to uphold contrarian views from time to time, so let me begin by offering a few reasons in favor of this form of pseudo-randomization. They are, principally, that it is little different than any other form of systematic sampling, such as obtaining samples of environmental media at points of a grid in the field or sampling every other tree in an orchard, and therefore this sampling might enjoy comparable advantages.
The analogy here is perfect: age was "gridded" by year starting at an origin of zero and assignment to the groups alternated along this (one-dimensional) grid. Some advantages of this approach are to guarantee wide, even dispersion of the sample across the field or orchard (or ages, in this case), which helps even out influences related to location (or time). This can be especially useful when theory suggests that location is the predominant factor in variation of response. Moreover, except for really tiny samples, analyzing the data as if they were a simple random sample introduces relatively little error. Furthermore, some randomization is possible: in the field we can randomly choose the origin and orientation of the grid. In the present case, we can at least randomize whether the even years are controls or treatment subjects.
Another advantage of gridded sampling is to detect localized variation. In the field, this would be "pockets" of unusual responses. Statistically, we may think of them as manifestations of spatial correlation. In the present situation, if there is some chance that a relatively narrow age range experiences unusual responses, then the gridded design is an excellent choice, because a purely randomized design can by chance contain large gaps in ages within one of the groups. (But a better design might be to stratify: use parity of age to form two analytical strata and then, independently within each stratum, randomize patients into control and treatment groups.)
Unfortunately, this defense falls apart once we come to terms with how ages are actually reported. US Census data show that (1) self-reported ages tend to be rounded to multiples of five (I have seen this in analyses of rural block group data) and (2) this tendency is associated with indicators of lower education or socioeconomic status. (It is also well known, although difficult to test, that the final digit in many self-reported ages is $9$, that people in certain fields of work, such as acting, tend to reduce their reported ages, and others will exaggerate their ages for various purposes.) Thus, at least to a slight degree in at least some areas of the US (and even more so elsewhere in the world), the parity of one's reported age is likely to be associated with factors important for the experiment. This renders the concern in the question less than hypothetical: it is real. At this point, the previous answers in this thread capably present the additional thoughts I would care to make, so I will stop and invite you to re-read them.
|
What's wrong with (some) pseudo-randomization
It is a good exercise to uphold contrarian views from time to time, so let me begin by offering a few reasons in favor of this form of pseudo-randomization. They are, principally, that it is little d
|
10,560
|
What's wrong with (some) pseudo-randomization
|
I agree the example you give is pretty innocuous but...
If the agents involved (either the person dealing out the intervention or the people receiving it) become aware of the assignment scheme, they can take advantage of it. It should be fairly obvious why such self-selection is problematic in most experimental designs.
One example I am aware of in criminology goes like this: the experiment was meant to test the deterrent effect of a night in jail after a domestic dispute vs. just asking the perpetrator to leave for the night. Officers were given a booklet of sheets, and the color of the current sheet on top identified which treatment the perp. in the particular incident was supposed to receive.
What ended up happening was that officers intentionally disobeyed the study design and chose a sheet based on personal preferences for what should be done to the perp. It isn't far-fetched to suspect that similar fudging of years is at least possible in your example.
|
What's wrong with (some) pseudo-randomization
|
I agree the example you give is pretty innocuous but...
If the agents involved (either the person dealing out the intervention or the people getting the intervention) become aware of the assignment sc
|
What's wrong with (some) pseudo-randomization
I agree the example you give is pretty innocuous but...
If the agents involved (either the person dealing out the intervention or the people receiving it) become aware of the assignment scheme, they can take advantage of it. It should be fairly obvious why such self-selection is problematic in most experimental designs.
One example I am aware of in criminology goes like this: the experiment was meant to test the deterrent effect of a night in jail after a domestic dispute vs. just asking the perpetrator to leave for the night. Officers were given a booklet of sheets, and the color of the current sheet on top identified which treatment the perp. in the particular incident was supposed to receive.
What ended up happening was that officers intentionally disobeyed the study design and chose a sheet based on personal preferences for what should be done to the perp. It isn't far-fetched to suspect that similar fudging of years is at least possible in your example.
|
What's wrong with (some) pseudo-randomization
I agree the example you give is pretty innocuous but...
If the agents involved (either the person dealing out the intervention or the people getting the intervention) become aware of the assignment sc
|
10,561
|
What's wrong with (some) pseudo-randomization
|
What you are proposing is NOT pseudo-randomization. Pseudo-randomization uses a seed to reproducibly generate a pseudo-random sequence based on the internal clock of a computer. The randomization assignment does NOT depend on patient level characteristics.
The point of randomization is to balance the distribution of predictive covariates, not just the means (and it's not guaranteed you would even have that). In other words, for any given treated patient the closest available matched control will always differ by one year of age. While it's true little is published about the characteristics of people born in even years versus odd years, you introduce a sensitivity that's otherwise moot when using traditional randomization.
If you use a deterministic criterion to randomize patients, how will you ensure an approximate 50:50 (or other) allocation of treatment vs. control? The situation gets exponentially worse if you try to stratify by, say, site.
What if you need to randomize to one of three different treatments? Or worse, what about an adaptive randomization where you begin randomizing to one of three treatment arms, and then based on safety or efficacy, you decide to drop an arm and randomize 1:1?
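To make the contrast concrete, here is a minimal base-R sketch (my addition, not part of the answer) of permuted-block randomization, which guarantees an exact 50:50 allocation no matter what patient-level characteristics look like:

```r
# Permuted-block randomization: each block of 4 contains exactly two
# "treatment" and two "control" slots in random order, so the overall
# split is exactly 50:50 regardless of any patient trait such as age.
set.seed(1)
n <- 20                                                        # patients, in enrollment order
block <- function() sample(rep(c("treatment", "control"), 2))  # one shuffled block of 4
assignment <- unlist(replicate(n / 4, block(), simplify = FALSE))
table(assignment)  # exactly 10 control, 10 treatment
```

A deterministic rule like odd/even birth year offers no such guarantee, and cannot be extended to three arms or to adaptive reallocation.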
|
What's wrong with (some) pseudo-randomization
|
What you are proposing is NOT pseudo-randomization. Pseudo-randomization uses a seed to reproducibly generate a pseudo-random sequence based on the internal clock of a computer. The randomization assi
|
What's wrong with (some) pseudo-randomization
What you are proposing is NOT pseudo-randomization. Pseudo-randomization uses a seed to reproducibly generate a pseudo-random sequence based on the internal clock of a computer. The randomization assignment does NOT depend on patient level characteristics.
The point of randomization is to balance the distribution of predictive covariates, not just the means (and it's not guaranteed you would even have that). In other words, for any given treated patient the closest available matched control will always differ by one year of age. While it's true little is published about the characteristics of people born in even years versus odd years, you introduce a sensitivity that's otherwise moot when using traditional randomization.
If you use a deterministic criterion to randomize patients, how will you ensure an approximate 50:50 (or other) allocation of treatment vs. control? The situation gets exponentially worse if you try to stratify by, say, site.
What if you need to randomize to one of three different treatments? Or worse, what about an adaptive randomization where you begin randomizing to one of three treatment arms, and then based on safety or efficacy, you decide to drop an arm and randomize 1:1?
|
What's wrong with (some) pseudo-randomization
What you are proposing is NOT pseudo-randomization. Pseudo-randomization uses a seed to reproducibly generate a pseudo-random sequence based on the internal clock of a computer. The randomization assi
|
10,562
|
Column-wise matrix normalization in R [closed]
|
This is what sweep and scale are for.
sweep(m, 2, colSums(m), FUN="/")
scale(m, center=FALSE, scale=colSums(m))
Alternatively, you could use recycling, but you have to transpose it twice.
t(t(m)/colSums(m))
Or you could construct the full matrix you want to divide by, like you did in your question. Here's another way you might do that.
m/colSums(m)[col(m)]
And notice also caracal's addition from the comments:
m %*% diag(1/colSums(m))
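As a quick sanity check (my addition, not part of the original answer), the variants above can be verified to agree on a small matrix:

```r
# All four approaches produce the same column-normalized matrix,
# and every column of the result sums to 1.
m <- matrix(1:6, nrow = 2)
r_sweep <- sweep(m, 2, colSums(m), FUN = "/")
r_recyc <- t(t(m) / colSums(m))
r_index <- m / colSums(m)[col(m)]
r_diag  <- m %*% diag(1 / colSums(m))
stopifnot(all.equal(r_sweep, r_recyc),
          all.equal(r_sweep, r_index),
          all.equal(r_sweep, r_diag))
colSums(r_sweep)  # each column sums to 1 (up to floating point)
```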
|
Column-wise matrix normalization in R [closed]
|
This is what sweep and scale are for.
sweep(m, 2, colSums(m), FUN="/")
scale(m, center=FALSE, scale=colSums(m))
Alternatively, you could use recycling, but you have to transpose it twice.
t(t(m)/colS
|
Column-wise matrix normalization in R [closed]
This is what sweep and scale are for.
sweep(m, 2, colSums(m), FUN="/")
scale(m, center=FALSE, scale=colSums(m))
Alternatively, you could use recycling, but you have to transpose it twice.
t(t(m)/colSums(m))
Or you could construct the full matrix you want to divide by, like you did in your question. Here's another way you might do that.
m/colSums(m)[col(m)]
And notice also caracal's addition from the comments:
m %*% diag(1/colSums(m))
|
Column-wise matrix normalization in R [closed]
This is what sweep and scale are for.
sweep(m, 2, colSums(m), FUN="/")
scale(m, center=FALSE, scale=colSums(m))
Alternatively, you could use recycling, but you have to transpose it twice.
t(t(m)/colS
|
10,563
|
Column-wise matrix normalization in R [closed]
|
Another option is prop.table(m, 2), which internally uses sweep.
It may be of interest to compare the performance of these equivalent solutions, so I ran a little benchmark (using the microbenchmark package).
This is the input matrix m I've used:
[,1] [,2] [,3] [,4] [,5]
A 1.831564e-02 4.978707e-02 1.353353e-01 3.678794e-01 3.678794e-01
B 3.678794e-01 1.353353e-01 4.978707e-02 1.831564e-02 6.737947e-03
C 4.539993e-05 2.061154e-09 9.357623e-14 4.248354e-18 5.242886e-22
D 1.831564e-02 4.978707e-02 1.353353e-01 3.678794e-01 3.678794e-01
E 3.678794e-01 1.353353e-01 4.978707e-02 1.831564e-02 6.737947e-03
F 4.539993e-05 2.061154e-09 9.357623e-14 4.248354e-18 5.242886e-22
G 1.831564e-02 4.978707e-02 1.353353e-01 3.678794e-01 3.678794e-01
H 3.678794e-01 1.353353e-01 4.978707e-02 1.831564e-02 6.737947e-03
I 4.539993e-05 2.061154e-09 9.357623e-14 4.248354e-18 5.242886e-22
This is the benchmark setup:
microbenchmark(
prop = prop.table(m, 2),
scale = scale(m, center=FALSE, scale=colSums(m)),
sweep = sweep(m, 2, colSums(m), FUN="/"),
t_t_colsums = t(t(m)/colSums(m)),
m_colsums_col = m/colSums(m)[col(m)],
m_mult_diag = m %*% diag(1/colSums(m)),
times = 1500L)
These are the results of the benchmark:
Unit: microseconds
expr min lq median uq max
1 m_colsums_col 29.089 32.9565 35.9870 37.5215 1547.972
2 m_mult_diag 43.278 47.6115 51.7075 53.8945 110.560
3 prop 207.070 214.3010 216.6800 219.9680 2091.913
4 scale 133.659 142.6325 145.3100 147.9195 1730.640
5 sweep 113.969 119.6315 121.3725 123.6570 1663.356
6 t_t_colsums 56.976 65.3580 67.8895 69.5130 1640.660
For completeness, this is the output:
[,1] [,2] [,3] [,4] [,5]
A 1.580677e-02 8.964714e-02 2.436862e-01 3.175247e-01 3.273379e-01
B 3.174874e-01 2.436862e-01 8.964714e-02 1.580862e-02 5.995403e-03
C 3.918106e-05 3.711336e-09 1.684944e-13 3.666847e-18 4.665103e-22
D 1.580677e-02 8.964714e-02 2.436862e-01 3.175247e-01 3.273379e-01
E 3.174874e-01 2.436862e-01 8.964714e-02 1.580862e-02 5.995403e-03
F 3.918106e-05 3.711336e-09 1.684944e-13 3.666847e-18 4.665103e-22
G 1.580677e-02 8.964714e-02 2.436862e-01 3.175247e-01 3.273379e-01
H 3.174874e-01 2.436862e-01 8.964714e-02 1.580862e-02 5.995403e-03
I 3.918106e-05 3.711336e-09 1.684944e-13 3.666847e-18 4.665103e-22
Without a doubt, for small matrices m / colSums(m)[col(m)] wins!
But what about big matrices? In the following example I've used a 1000x1000 matrix.
set.seed(42)
m <- matrix(sample(1:10, 1e6, TRUE), 1e3)
...
Unit: milliseconds
expr min lq median uq max
1 m_colsums_col 55.26442 58.94281 64.41691 102.69683 119.08685
2 m_mult_diag 34.67692 41.68494 80.05480 89.48099 99.72062
3 prop 87.95552 94.13143 99.17044 136.03669 160.51586
4 scale 52.84534 55.07107 60.57154 99.87761 156.16622
5 sweep 52.79542 55.93877 61.55066 99.67766 119.05134
6 t_t_colsums 63.09783 65.53783 68.93731 110.03691 127.89792
For big matrices m / colSums(m)[col(m)] performs well (4th position) but does not win.
For big matrices m %*% diag(1/colSums(m)) wins!
|
Column-wise matrix normalization in R [closed]
|
Another option is prop.table(m, 2), which internally uses sweep.
It may be of interest to compare the performance of these equivalent solutions, so I did a little benchmark (using microben
|
Column-wise matrix normalization in R [closed]
Another option is prop.table(m, 2), which internally uses sweep.
It may be of interest to compare the performance of these equivalent solutions, so I ran a little benchmark (using the microbenchmark package).
This is the input matrix m I've used:
[,1] [,2] [,3] [,4] [,5]
A 1.831564e-02 4.978707e-02 1.353353e-01 3.678794e-01 3.678794e-01
B 3.678794e-01 1.353353e-01 4.978707e-02 1.831564e-02 6.737947e-03
C 4.539993e-05 2.061154e-09 9.357623e-14 4.248354e-18 5.242886e-22
D 1.831564e-02 4.978707e-02 1.353353e-01 3.678794e-01 3.678794e-01
E 3.678794e-01 1.353353e-01 4.978707e-02 1.831564e-02 6.737947e-03
F 4.539993e-05 2.061154e-09 9.357623e-14 4.248354e-18 5.242886e-22
G 1.831564e-02 4.978707e-02 1.353353e-01 3.678794e-01 3.678794e-01
H 3.678794e-01 1.353353e-01 4.978707e-02 1.831564e-02 6.737947e-03
I 4.539993e-05 2.061154e-09 9.357623e-14 4.248354e-18 5.242886e-22
This is the benchmark setup:
microbenchmark(
prop = prop.table(m, 2),
scale = scale(m, center=FALSE, scale=colSums(m)),
sweep = sweep(m, 2, colSums(m), FUN="/"),
t_t_colsums = t(t(m)/colSums(m)),
m_colsums_col = m/colSums(m)[col(m)],
m_mult_diag = m %*% diag(1/colSums(m)),
times = 1500L)
These are the results of the benchmark:
Unit: microseconds
expr min lq median uq max
1 m_colsums_col 29.089 32.9565 35.9870 37.5215 1547.972
2 m_mult_diag 43.278 47.6115 51.7075 53.8945 110.560
3 prop 207.070 214.3010 216.6800 219.9680 2091.913
4 scale 133.659 142.6325 145.3100 147.9195 1730.640
5 sweep 113.969 119.6315 121.3725 123.6570 1663.356
6 t_t_colsums 56.976 65.3580 67.8895 69.5130 1640.660
For completeness, this is the output:
[,1] [,2] [,3] [,4] [,5]
A 1.580677e-02 8.964714e-02 2.436862e-01 3.175247e-01 3.273379e-01
B 3.174874e-01 2.436862e-01 8.964714e-02 1.580862e-02 5.995403e-03
C 3.918106e-05 3.711336e-09 1.684944e-13 3.666847e-18 4.665103e-22
D 1.580677e-02 8.964714e-02 2.436862e-01 3.175247e-01 3.273379e-01
E 3.174874e-01 2.436862e-01 8.964714e-02 1.580862e-02 5.995403e-03
F 3.918106e-05 3.711336e-09 1.684944e-13 3.666847e-18 4.665103e-22
G 1.580677e-02 8.964714e-02 2.436862e-01 3.175247e-01 3.273379e-01
H 3.174874e-01 2.436862e-01 8.964714e-02 1.580862e-02 5.995403e-03
I 3.918106e-05 3.711336e-09 1.684944e-13 3.666847e-18 4.665103e-22
Without a doubt, for small matrices m / colSums(m)[col(m)] wins!
But what about big matrices? In the following example I've used a 1000x1000 matrix.
set.seed(42)
m <- matrix(sample(1:10, 1e6, TRUE), 1e3)
...
Unit: milliseconds
expr min lq median uq max
1 m_colsums_col 55.26442 58.94281 64.41691 102.69683 119.08685
2 m_mult_diag 34.67692 41.68494 80.05480 89.48099 99.72062
3 prop 87.95552 94.13143 99.17044 136.03669 160.51586
4 scale 52.84534 55.07107 60.57154 99.87761 156.16622
5 sweep 52.79542 55.93877 61.55066 99.67766 119.05134
6 t_t_colsums 63.09783 65.53783 68.93731 110.03691 127.89792
For big matrices m / colSums(m)[col(m)] performs well (4th position) but does not win.
For big matrices m %*% diag(1/colSums(m)) wins!
|
Column-wise matrix normalization in R [closed]
Another option is prop.table(m, 2), which internally uses sweep.
It may be of interest to compare the performance of these equivalent solutions, so I did a little benchmark (using microben
|
10,564
|
Column-wise matrix normalization in R [closed]
|
apply(m, 2, function(x) { return(x / sum(x)) }) ?
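For reference (my addition), a runnable version of this one-liner with a quick check:

```r
# Column-normalize with apply; each result has the same length as a
# column, so apply returns a matrix of the same shape as the input.
m <- matrix(1:6, nrow = 2)
norm_cols <- apply(m, 2, function(x) x / sum(x))
colSums(norm_cols)  # each column sums to 1 (up to floating point)
```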
|
Column-wise matrix normalization in R [closed]
|
apply(m, 2, function(x) { return(x / sum(x)) }) ?
|
Column-wise matrix normalization in R [closed]
apply(m, 2, function(x) { return(x / sum(x)) }) ?
|
Column-wise matrix normalization in R [closed]
apply(m, 2, function(x) { return(x / sum(x)) }) ?
|
10,565
|
How to model this odd-shaped distribution (almost a reverse-J)
|
Methods of censored regression can handle data like this. They assume the residuals behave as in ordinary linear regression but have been modified so that
(Left censoring): all values smaller than a low threshold, which is independent of the data (but can vary from one case to another), have not been quantified; and/or
(Right censoring): all values larger than a high threshold, which is independent of the data (but can vary from one case to another), have not been quantified.
"Not quantified" means we know whether or not a value falls below (or above) its threshold, but that's all.
The fitting methods typically use maximum likelihood. When the model for the response $Y$ corresponding to a vector $X$ is in the form
$$Y \sim X \beta + \varepsilon$$
with iid $\varepsilon$ having a common distribution $F_\sigma$ with PDF $f_\sigma$ (where $\sigma$ are unknown "nuisance parameters"), then--in the absence of censoring--the log likelihood of observations $(x_i, y_i)$ is
$$\Lambda = \sum_{i=1}^n \log f_\sigma(y_i - x_i\beta).$$
With censoring present we may divide the cases into three (possibly empty) classes: for indexes $i=1$ to $n_1$, the $y_i$ contain the lower threshold values and represent left censored data; for indexes $i=n_1+1$ to $n_2$, the $y_i$ are quantified; and for the remaining indexes, the $y_i$ contain the upper threshold values and represent right censored data. The log likelihood is obtained in the same way as before: it is the log of the product of the probabilities.
$$\Lambda = \sum_{i=1}^{n_1} \log F_\sigma(y_i - x_i\beta) + \sum_{i=n_1+1}^{n_2} \log f_\sigma(y_i - x_i\beta) + \sum_{i=n_2+1}^n \log (1 - F_\sigma(y_i - x_i\beta)).$$
This is maximized numerically as a function of $(\beta, \sigma)$.
In my experience, such methods can work well when less than half the data are censored; otherwise, the results can be unstable.
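Before turning to a package, the log likelihood above can be maximized directly. Here is a minimal base-R sketch (my addition, not part of the original answer) assuming normal errors, a single predictor, and illustrative censoring limits:

```r
# Negative censored log-likelihood: Phi terms for censored cases,
# density terms for quantified cases, as in the formula above.
neg_loglik <- function(par, x, y, lo, hi) {
  b0 <- par[1]; b1 <- par[2]; sigma <- exp(par[3])  # log-scale keeps sigma > 0
  mu <- b0 + b1 * x
  ll <- ifelse(y <= lo,
               pnorm(lo, mu, sigma, log.p = TRUE),                      # left censored
        ifelse(y >= hi,
               pnorm(hi, mu, sigma, lower.tail = FALSE, log.p = TRUE),  # right censored
               dnorm(y, mu, sigma, log = TRUE)))                        # quantified
  -sum(ll)
}
set.seed(17)
x <- rnorm(500)
y <- pmin(pmax(0.2 + 0.5 * x + rnorm(500, 0, 0.3), -0.5), 1)  # censor at [-0.5, 1]
fit <- optim(c(0, 0, 0), neg_loglik, x = x, y = y, lo = -0.5, hi = 1)
round(fit$par[1:2], 2)  # should be near the true values (0.2, 0.5)
```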
Here is a simple R example using the censReg package to illustrate how OLS and censored results can differ (a lot) even with plenty of data. It qualitatively reproduces the data in the question.
library("censReg")
set.seed(17)
n.data <- 2960
coeff <- c(-0.001, 0.005)
sigma <- 0.005
x <- rnorm(n.data, 0.5)
y <- as.vector(coeff %*% rbind(rep(1, n.data), x) + rnorm(n.data, 0, sigma))
y.cen <- y
y.cen[y < 0] <- 0
y.cen[y > 0.01] <- 0.01
data = data.frame(list(x, y.cen))
The key things to notice are the parameters: the true slope is $0.005$, the true intercept is $-0.001$, and the true error SD is $0.005$.
Let's use both lm and censReg to fit a line:
fit <- censReg(y.cen ~ x, data=data, left=0.0, right=0.01)
summary(fit)
The results of this censored regression, given by print(fit), are
(Intercept) x sigma
-0.001028 0.004935 0.004856
Those are remarkably close to the correct values of $-0.001$, $0.005$, and $0.005$, respectively.
fit.OLS <- lm(y.cen ~ x, data=data)
summary(fit.OLS)
The OLS fit, given by print(fit.OLS), is
(Intercept) x
0.001996 0.002345
Not even remotely close! The estimated standard error reported by summary is $0.002864$, less than half the true value. These kinds of biases are typical of regressions with lots of censored data.
For comparison, let's limit the regression to the quantified data:
fit.part <- lm(y[0 <= y & y <= 0.01] ~ x[0 <= y & y <= 0.01])
summary(fit.part)
(Intercept) x[0 <= y & y <= 0.01]
0.003240 0.001461
Even worse!
A few pictures summarize the situation.
lineplot <- function() {
abline(coef(fit)[1:2], col="Red", lwd=2)
abline(coef(fit.OLS), col="Blue", lty=2, lwd=2)
abline(coef(fit.part), col=rgb(.2, .6, .2), lty=3, lwd=2)
}
par(mfrow=c(1,4))
plot(x,y, pch=19, cex=0.5, col="Gray", main="Hypothetical Data")
lineplot()
plot(x,y.cen, pch=19, cex=0.5, col="Gray", main="Censored Data")
lineplot()
hist(y.cen, breaks=50, main="Censored Data")
hist(y[0 <= y & y <= 0.01], breaks=50, main="Quantified Data")
The difference between the "hypothetical data" and "censored data" plots is that all y-values below $0$ or above $0.01$ in the former have been moved to their respective thresholds to produce the latter plot. As a result, you can see the censored data all lined up along the bottom and top.
Solid red lines are the censored fits, dashed blue lines the OLS fits, both of them based on the censored data only. The dashed green lines are the fits to the quantified data only. It is clear which is best: the blue and green lines are noticeably poor and only the red (for the censored regression fit) looks about right. The histograms at the right confirm that the $Y$ values of this synthetic dataset are indeed qualitatively like those of the question (mean = $0.0032$, SD = $0.0037$). The rightmost histogram shows the center (quantified) part of the histogram in detail.
|
How to model this odd-shaped distribution (almost a reverse-J)
|
Methods of censored regression can handle data like this. They assume the residuals behave as in ordinary linear regression but have been modified so that
(Left censoring): all values smaller than a
|
How to model this odd-shaped distribution (almost a reverse-J)
Methods of censored regression can handle data like this. They assume the residuals behave as in ordinary linear regression but have been modified so that
(Left censoring): all values smaller than a low threshold, which is independent of the data (but can vary from one case to another), have not been quantified; and/or
(Right censoring): all values larger than a high threshold, which is independent of the data (but can vary from one case to another), have not been quantified.
"Not quantified" means we know whether or not a value falls below (or above) its threshold, but that's all.
The fitting methods typically use maximum likelihood. When the model for the response $Y$ corresponding to a vector $X$ is in the form
$$Y \sim X \beta + \varepsilon$$
with iid $\varepsilon$ having a common distribution $F_\sigma$ with PDF $f_\sigma$ (where $\sigma$ are unknown "nuisance parameters"), then--in the absence of censoring--the log likelihood of observations $(x_i, y_i)$ is
$$\Lambda = \sum_{i=1}^n \log f_\sigma(y_i - x_i\beta).$$
With censoring present we may divide the cases into three (possibly empty) classes: for indexes $i=1$ to $n_1$, the $y_i$ contain the lower threshold values and represent left censored data; for indexes $i=n_1+1$ to $n_2$, the $y_i$ are quantified; and for the remaining indexes, the $y_i$ contain the upper threshold values and represent right censored data. The log likelihood is obtained in the same way as before: it is the log of the product of the probabilities.
$$\Lambda = \sum_{i=1}^{n_1} \log F_\sigma(y_i - x_i\beta) + \sum_{i=n_1+1}^{n_2} \log f_\sigma(y_i - x_i\beta) + \sum_{i=n_2+1}^n \log (1 - F_\sigma(y_i - x_i\beta)).$$
This is maximized numerically as a function of $(\beta, \sigma)$.
In my experience, such methods can work well when less than half the data are censored; otherwise, the results can be unstable.
Here is a simple R example using the censReg package to illustrate how OLS and censored results can differ (a lot) even with plenty of data. It qualitatively reproduces the data in the question.
library("censReg")
set.seed(17)
n.data <- 2960
coeff <- c(-0.001, 0.005)
sigma <- 0.005
x <- rnorm(n.data, 0.5)
y <- as.vector(coeff %*% rbind(rep(1, n.data), x) + rnorm(n.data, 0, sigma))
y.cen <- y
y.cen[y < 0] <- 0
y.cen[y > 0.01] <- 0.01
data = data.frame(list(x, y.cen))
The key things to notice are the parameters: the true slope is $0.005$, the true intercept is $-0.001$, and the true error SD is $0.005$.
Let's use both lm and censReg to fit a line:
fit <- censReg(y.cen ~ x, data=data, left=0.0, right=0.01)
summary(fit)
The results of this censored regression, given by print(fit), are
(Intercept) x sigma
-0.001028 0.004935 0.004856
Those are remarkably close to the correct values of $-0.001$, $0.005$, and $0.005$, respectively.
fit.OLS <- lm(y.cen ~ x, data=data)
summary(fit.OLS)
The OLS fit, given by print(fit.OLS), is
(Intercept) x
0.001996 0.002345
Not even remotely close! The estimated standard error reported by summary is $0.002864$, less than half the true value. These kinds of biases are typical of regressions with lots of censored data.
For comparison, let's limit the regression to the quantified data:
fit.part <- lm(y[0 <= y & y <= 0.01] ~ x[0 <= y & y <= 0.01])
summary(fit.part)
(Intercept) x[0 <= y & y <= 0.01]
0.003240 0.001461
Even worse!
A few pictures summarize the situation.
lineplot <- function() {
abline(coef(fit)[1:2], col="Red", lwd=2)
abline(coef(fit.OLS), col="Blue", lty=2, lwd=2)
abline(coef(fit.part), col=rgb(.2, .6, .2), lty=3, lwd=2)
}
par(mfrow=c(1,4))
plot(x,y, pch=19, cex=0.5, col="Gray", main="Hypothetical Data")
lineplot()
plot(x,y.cen, pch=19, cex=0.5, col="Gray", main="Censored Data")
lineplot()
hist(y.cen, breaks=50, main="Censored Data")
hist(y[0 <= y & y <= 0.01], breaks=50, main="Quantified Data")
The difference between the "hypothetical data" and "censored data" plots is that all y-values below $0$ or above $0.01$ in the former have been moved to their respective thresholds to produce the latter plot. As a result, you can see the censored data all lined up along the bottom and top.
Solid red lines are the censored fits, dashed blue lines the OLS fits, both of them based on the censored data only. The dashed green lines are the fits to the quantified data only. It is clear which is best: the blue and green lines are noticeably poor and only the red (for the censored regression fit) looks about right. The histograms at the right confirm that the $Y$ values of this synthetic dataset are indeed qualitatively like those of the question (mean = $0.0032$, SD = $0.0037$). The rightmost histogram shows the center (quantified) part of the histogram in detail.
|
How to model this odd-shaped distribution (almost a reverse-J)
Methods of censored regression can handle data like this. They assume the residuals behave as in ordinary linear regression but have been modified so that
(Left censoring): all values smaller than a
|
10,566
|
How to model this odd-shaped distribution (almost a reverse-J)
|
Are the values always between 0 and 1?
If so you might consider a beta distribution and beta regression.
But make sure to think through the process that leads to your data. You could also do a 0 and 1 inflated model (0 inflated models are common; you would probably need to extend to 1 inflated yourself). The big difference is whether those spikes represent large numbers of exact 0's and 1's or just values close to 0 and 1.
It may be best to consult with a local statistician (with a non-disclosure agreement so that you can discuss the details of where the data come from) to work out the best approach.
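To make the 0/1-inflated idea concrete, here is a hand-rolled sketch (my addition; a real analysis would use a dedicated package): treat the exact 0's and 1's as point masses and fit the interior with a beta distribution by the method of moments.

```r
# Zero/one-inflated beta, fit by hand on fake data: point masses at the
# exact 0's and 1's, method-of-moments beta fit for the interior.
set.seed(42)
y <- c(rep(0, 30), rep(1, 20), rbeta(50, 2, 5))  # fake bounded outcome
p0 <- mean(y == 0)                               # point mass at exactly 0
p1 <- mean(y == 1)                               # point mass at exactly 1
inner <- y[y > 0 & y < 1]                        # continuous interior part
mu <- mean(inner); v <- var(inner)
a  <- mu * (mu * (1 - mu) / v - 1)               # beta shape1 estimate
b  <- (1 - mu) * (mu * (1 - mu) / v - 1)         # beta shape2 estimate
round(c(p0 = p0, p1 = p1, shape1 = a, shape2 = b), 2)
```

Whether this decomposition is right depends on whether the spikes really are exact 0's and 1's, which is the distinction the answer draws.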
|
10,567
|
How to model this odd-shaped distribution (almost a reverse-J)
|
In concordance with Greg Snow's advice, I've heard beta models are useful in such situations as well (see Smithson & Verkuilen, 2006, A Better Lemon Squeezer), as is quantile regression (Bottai et al., 2010), but the floor and ceiling effects here are so pronounced that these may be inappropriate (especially the beta regression).
Another alternative would be to consider types of censored regression models, in particular the Tobit model, where we treat the observed outcomes as generated by some underlying latent variable that is continuous (and presumably normal). I'm not going to say this underlying continuous model is reasonable given your histogram, but you can find some support for it in that the distribution (ignoring the floor) has a higher density at lower values of the instrument and slowly curtails to higher values.
Good luck though, that censoring is so dramatic it is hard to imagine recovering much useful information within the extreme buckets. It looks to me like nearly half of your sample falls within the floor and ceiling bins.
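The Tobit idea above can be sketched as a log-likelihood (a minimal illustration, not anyone's production code; the censoring bounds `lo`/`hi` and all data below are made up): floor observations contribute a normal CDF term, ceiling observations a survival-function term, and interior observations the usual density.

```python
import numpy as np
from scipy.stats import norm

def tobit_negloglik(params, y, X, lo=0.0, hi=1.0):
    """Two-sided Tobit: latent y* = X @ beta + eps, eps ~ N(0, sigma^2);
    the observed y is y* clipped to [lo, hi]."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                 # unconstrained parameterisation
    mu = X @ beta
    ll = np.where(
        y <= lo, norm.logcdf((lo - mu) / sigma),          # floor mass
        np.where(y >= hi, norm.logsf((hi - mu) / sigma),  # ceiling mass
                 norm.logpdf(y, loc=mu, scale=sigma)))    # interior density
    return -np.sum(ll)

# simulate censored data; the likelihood at the true parameters should
# beat an obviously wrong parameter vector
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sigma_true = np.array([0.5, 0.4]), 0.3
y = np.clip(X @ beta_true + sigma_true * rng.normal(size=n), 0.0, 1.0)
nll_true = tobit_negloglik(np.array([0.5, 0.4, np.log(0.3)]), y, X)
nll_bad = tobit_negloglik(np.array([0.0, 0.0, 0.0]), y, X)
```

Passing `tobit_negloglik` to a generic optimiser such as `scipy.optimize.minimize` then yields maximum-likelihood estimates of the latent regression.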
|
10,568
|
Does the Bayesian posterior need to be a proper distribution?
|
(It is somewhat of a surprise to read the previous answers, which focus on the potential impropriety of the posterior when the prior is proper, since, as far as I can tell, the question is whether or not the posterior has to be proper (i.e., integrable to one) to be a proper (i.e., acceptable for Bayesian inference) posterior.)
In Bayesian statistics, the posterior distribution has to be a probability distribution, from which one can derive moments like the posterior mean $\mathbb{E}^\pi[h(\theta)|x]$ and probability statements like the coverage of a credible region, $\mathbb{P}(\pi(\theta|x)>\kappa|x)$. If $$\int f(x|\theta)\,\pi(\theta)\,\text{d}\theta = +\infty\,,\qquad (1)$$ the posterior $\pi(\theta|x)$ cannot be normalised into a probability density and Bayesian inference simply cannot be conducted. The posterior simply does not exist in such cases.
Actually, the integral in (1) must be finite for all $x$'s in the sample space, not only for the observed $x$, for otherwise selecting the prior would depend on the data. This means that priors like Haldane's prior, $\pi(p)\propto \{1/p(1-p)\}$, on the probability $p$ of a Binomial or a Negative Binomial variable $X$ cannot be used, since the posterior is not defined for $x=0$.
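To make the Haldane case concrete: with $x=0$ observed from a Binomial$(n,p)$ model, the marginal appearing in (1) is
$$\int_0^1 \binom{n}{0}(1-p)^n\,\frac{\text{d}p}{p(1-p)} = \int_0^1 \frac{(1-p)^{n-1}}{p}\,\text{d}p = +\infty\,,$$
since the integrand behaves like $1/p$ near $p=0$, so the posterior cannot be normalised for that observation.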
I know of one exception when one can consider "improper posteriors": it is found in "The Art of Data Augmentation" by David van Dyk and Xiao-Li Meng. The improper measure is over a so-called working parameter $\alpha$ such that the observation is produced by the marginal of an augmented distribution
$$f(x|\theta)=\int_{T(x^\text{aug})=x} f(x^\text{aug}|\theta,\alpha)\,\text{d}x^\text{aug}$$
and van Dyk and Meng put an improper prior $p(\alpha)$ on this working parameter $\alpha$ in order to speed up the simulation of $\pi(\theta|x)$ (which remains well-defined as a probability density) by MCMC.
In another perspective, somewhat related to the answer by eretmochelys, namely a perspective of Bayesian decision theory, a setting where (1) occurs could still be acceptable if it led to optimal decisions. Namely, if $L(\delta,\theta)\ge 0$ is a loss function evaluating the impact of using the decision $\delta$, a Bayesian optimal decision under the prior $\pi$ is given by
$$\delta^\star(x)=\arg\min_\delta \int L(\delta,\theta) f(x|\theta)\,\pi(\theta)\,\text{d}\theta$$ and all that matters is that this integral is not everywhere (in $\delta$) infinite. Whether or not (1) holds is secondary for the derivation of $\delta^\star(x)$, even though properties like admissibility are only guaranteed when (1) holds.
|
10,569
|
Does the Bayesian posterior need to be a proper distribution?
|
The posterior distribution need not be proper even if the prior is proper. For example,
suppose $v$ has a Gamma prior with shape 0.25 (which is proper), and we model our datum $x$ as drawn from a Gaussian distribution with mean zero and variance $v$. Suppose $x$ is observed to be zero. Then the likelihood $p(x|v)$ is proportional to $v^{-0.5}$, which makes the posterior distribution for $v$ improper, since it is proportional to $v^{-1.25} e^{-v}$. This problem arises because of the wacky nature of continuous variables.
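The non-integrability can also be seen numerically (a quick sketch; the grid sizes are arbitrary): the mass of $v^{-1.25}e^{-v}$ above a cutoff $\varepsilon$ keeps growing as $\varepsilon \to 0$, roughly like $4\varepsilon^{-1/4}$.

```python
import math

def mass_above(eps, upper=50.0, n=4000):
    # midpoint rule on a geometric grid, which resolves the spike at 0
    r = (upper / eps) ** (1.0 / n)
    total, v = 0.0, eps
    for _ in range(n):
        v_next = v * r
        mid = 0.5 * (v + v_next)
        total += mid ** -1.25 * math.exp(-mid) * (v_next - v)
        v = v_next
    return total

# each time eps shrinks by 100x, the "total mass" keeps climbing
masses = [mass_above(eps) for eps in (1e-2, 1e-4, 1e-6)]
```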
|
10,570
|
Does the Bayesian posterior need to be a proper distribution?
|
Defining the set
$$
\text{Bogus Data} = \left\{ x:\int f(x\mid \theta)\,\pi(\theta)\,d\theta = \infty \right\} \, ,
$$
we have
$$
\mathrm{Pr}\left(X\in\text{Bogus Data}\right) = \int_\text{Bogus Data} \int f(x\mid \theta)\,\pi(\theta)\,d\theta\,dx = \int_\text{Bogus Data} \infty\,dx \, .
$$
The last integral will be equal to $\infty$ if the Lebesgue measure of $\text{Bogus Data}$ is positive. But this is impossible, because this integral gives you a probability (a real number between $0$ and $1$). Hence, it follows that the Lebesgue measure of $\text{Bogus Data}$ is equal to $0$, and, of course, it also follows that $\mathrm{Pr}\left(X\in\text{Bogus Data}\right)=0$.
In words: the prior predictive probability of those sample values that make the posterior improper is equal to zero.
Moral of the story: beware of null sets, they may bite, however improbable it may be.
P.S. As pointed out by Prof. Robert in the comments, this reasoning blows up if the prior is improper.
|
10,571
|
Does the Bayesian posterior need to be a proper distribution?
|
Any "distribution" must sum (or integrate) to 1. I can think a few examples where one might work with un-normalized distributions, but I am uncomfortable ever calling anything which marginalizes to anything but 1 a "distribution".
Given that you mentioned Bayesian posterior, I bet your question might come from a classification problem of searching for the optimal estimate of $x$ given some feature vector $d$
$$
\begin{align}
\hat{x} &= \arg \max_x P_{X|D}(x|d) \\ &= \arg \max_x \frac{P_{D|X}(d|x) P_X(x)}{P_D(d)} \\ &= \arg \max_x {P_{D|X}(d|x) P_X(x)}
\end{align}
$$
where the last equality comes from the fact that $P_D$ doesn't depend on $x$. We can then choose our $\hat{x}$ exclusively based on the value $P_{D|X}(d|x) P_X(x)$ which is proportional to our Bayesian posterior, but do not confuse it for a probability!
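A toy numeric illustration of that last line (the class names and probabilities here are invented): the unnormalised product $P_{D|X}(d|x)P_X(x)$ picks the same $\hat{x}$ as the normalised posterior, even though it is not itself a probability.

```python
prior = {"cat": 0.3, "dog": 0.7}                  # P_X(x)
lik = {"cat": 0.8, "dog": 0.2}                    # P_{D|X}(d|x) for observed d
unnorm = {x: lik[x] * prior[x] for x in prior}    # proportional to posterior
evidence = sum(unnorm.values())                   # P_D(d)
posterior = {x: unnorm[x] / evidence for x in prior}

x_hat_unnorm = max(unnorm, key=unnorm.get)        # skip the division...
x_hat_post = max(posterior, key=posterior.get)    # ...same arg max either way
```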
|
10,572
|
Does the Bayesian posterior need to be a proper distribution?
|
Later is better than never. Here is a natural and useful counterexample I believe, arising from Bayesian nonparametrics.
Suppose ${\mathbf{x}} = \left( {{x_1},...,{x_i},...{x_n}} \right) \in {\mathbb{R}^n}$ has posterior probability distribution
$p\left( {\left. {\mathbf{x}} \right|D} \right) \propto {e^{ - \frac{1}{2}{{\mathbf{x}}^{\mathbf{T}}}{\mathbf{Ax}} + {{\mathbf{J}}^{\mathbf{T}}}{\mathbf{x}}}}$
We want to evaluate the posterior expectation $\mathbb{E}\left. {\mathbf{x}} \right|D$.
If ${\mathbf{A}}$ is positive definite, then let
$I \triangleq \int\limits_{{\mathbb{R}^n}} {{e^{ - \frac{1}{2}{{\mathbf{x}}^{\mathbf{T}}}{\mathbf{Ax}} + {{\mathbf{J}}^{\mathbf{T}}}{\mathbf{x}}}}{{\text{d}}^n}{\mathbf{x}}} = \sqrt {{{\left( {2\pi } \right)}^n}{{\left| {\mathbf{A}} \right|}^{ - 1}}} {e^{\frac{1}{2}{{\mathbf{J}}^{\mathbf{T}}}{{\mathbf{A}}^{ - 1}}{\mathbf{J}}}}$
By Leibniz rule/Feynman trick, we have
$
\frac{{\partial I}}{{\partial {J_j}}} = \int\limits_{{\mathbb{R}^n}} {\frac{{\partial {e^{ - \frac{1}{2}{{\mathbf{x}}^{\mathbf{T}}}{\mathbf{Ax}} + {{\mathbf{J}}^{\mathbf{T}}}{\mathbf{x}}}}}}{{\partial {J_j}}}{{\text{d}}^n}{\mathbf{x}}} = \int\limits_{\,{\mathbb{R}^n}} {{x_j}{e^{ - \frac{1}{2}{{\mathbf{x}}^{\mathbf{T}}}{\mathbf{Ax}} + {{\mathbf{J}}^{\mathbf{T}}}{\mathbf{x}}}}{{\text{d}}^n}{\mathbf{x}}} = \\
\frac{\partial }{{\partial {J_j}}}\sqrt {{{\left( {2\pi } \right)}^n}{{\left| {\mathbf{A}} \right|}^{ - 1}}} {e^{\frac{1}{2}{{\mathbf{J}}^{\mathbf{T}}}{{\mathbf{A}}^{ - 1}}{\mathbf{J}}}} = \sqrt {{{\left( {2\pi } \right)}^n}{{\left| {\mathbf{A}} \right|}^{ - 1}}} {e^{\frac{1}{2}{{\mathbf{J}}^{\mathbf{T}}}{{\mathbf{A}}^{ - 1}}{\mathbf{J}}}}\frac{\partial }{{\partial {J_j}}}\frac{1}{2}{{\mathbf{J}}^{\mathbf{T}}}{{\mathbf{A}}^{ - 1}}{\mathbf{J}} = \\
\frac{1}{2}I\frac{\partial }{{\partial {J_j}}}{{\mathbf{J}}^{\mathbf{T}}}{{\mathbf{A}}^{ - 1}}{\mathbf{J}} = I\sum\limits_{i = 1}^n {{\mathbf{A}}_{ij}^{ - 1}{{\mathbf{J}}_i}} \\ $
Therefore
$\mathbb{E}\left. {{x_j}} \right|D = \frac{{\int\limits_{\,{\mathbb{R}^n}} {{x_j}{e^{ - \frac{1}{2}{{\mathbf{x}}^{\mathbf{T}}}{\mathbf{Ax}} + {{\mathbf{J}}^{\mathbf{T}}}{\mathbf{x}}}}{{\text{d}}^n}{\mathbf{x}}} }}{{\int\limits_{\,{\mathbb{R}^n}} {{e^{ - \frac{1}{2}{{\mathbf{x}}^{\mathbf{T}}}{\mathbf{Ax}} + {{\mathbf{J}}^{\mathbf{T}}}{\mathbf{x}}}}{{\text{d}}^n}{\mathbf{x}}} }} = \sum\limits_{i = 1}^n {{\mathbf{A}}_{ij}^{ - 1}{{\mathbf{J}}_i}} $
and
$\mathbb{E}\left. {\mathbf{x}} \right|D = {{\mathbf{A}}^{ - 1}}{\mathbf{J}}$
Now, if ${\mathbf{A}}$ is only positive semi-definite and singular, so that $p\left( {\left. {\mathbf{x}} \right|D} \right)$ is improper, degenerate and
$\int\limits_{{\mathbb{R}^n}} {p\left( {\left. {\mathbf{x}} \right|D} \right){{\text{d}}^n}{\mathbf{x}}} = + \infty $
it suffices to replace the matrix inverse ${{\mathbf{A}}^{ - 1}}$ by its Moore-Penrose pseudoinverse ${{\mathbf{A}}^ + }$ to get
$\mathbb{E}\left. {\mathbf{x}} \right|D = {{\mathbf{A}}^ + }{\mathbf{J}}$
IT WORKS. The same goes for higher moments. So, it seems that a Bayesian posterior does not need to be proper/non-degenerate in order to be useful, that is, to yield legitimate inferences.
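A minimal numerical sketch of the last step (the matrices and ${\mathbf{J}}$ are made up, with ${\mathbf{J}}$ chosen in the range of the singular ${\mathbf{A}}$): when ${\mathbf{A}}$ is nonsingular the pseudoinverse coincides with the ordinary inverse, and when it is singular it still returns a finite "posterior mean".

```python
import numpy as np

A_full = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
A_sing = np.array([[2.0, 0.0], [0.0, 0.0]])   # PSD but singular
J = np.array([3.0, 0.0])

mean_full = np.linalg.pinv(A_full) @ J        # equals inv(A_full) @ J
mean_sing = np.linalg.pinv(A_sing) @ J        # finite despite the impropriety
```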
|
10,573
|
Does the Bayesian posterior need to be a proper distribution?
|
An improper posterior distribution only arises when you have an improper prior distribution. The implication of this is that the asymptotic results do not hold.
As an example, consider binomial data consisting of $n$ successes and 0 failures: if you use $Beta(0,0)$ as the prior distribution, then the posterior will be improper. In this situation, the best approach is to choose a proper prior distribution to substitute for your improper prior.
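Spelled out: the $Beta(0,0)$ prior is $\pi(p)\propto p^{-1}(1-p)^{-1}$, so with $n$ successes and no failures the posterior kernel is
$$\pi(p\mid x)\propto p^{n}\cdot p^{-1}(1-p)^{-1} = p^{n-1}(1-p)^{-1}\,,$$
whose integral over $(0,1)$ diverges at $p=1$ (the integrand behaves like $1/(1-p)$ there), hence the impropriety.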
|
10,574
|
Dropout makes performance worse
|
Dropout is a regularization technique, and is most effective at preventing overfitting. However, there are several places when dropout can hurt performance.
Right before the last layer. This is generally a bad place to apply dropout, because the network has no ability to "correct" errors induced by dropout before the classification happens. If I read correctly, you might have put dropout right before the softmax in the iris MLP.
When the network is small relative to the dataset, regularization is usually unnecessary. If the model capacity is already low, lowering it further by adding regularization will hurt performance. I noticed most of your networks were relatively small and shallow.
When training time is limited. It's unclear if this is the case here, but if you don't train until convergence, dropout may give worse results. Usually dropout hurts performance at the start of training, but results in the final ''converged'' error being lower. Therefore, if you don't plan to train until convergence, you may not want to use dropout.
Finally, I want to mention that, as far as I know, dropout is rarely used nowadays, having been largely supplanted by a technique known as batch normalization. Of course, that's not to say dropout isn't a valid and effective tool to try out.
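For reference, "inverted" dropout itself is only a few lines (a generic sketch, not tied to any particular framework): units are zeroed with probability $p$ at training time and the survivors are rescaled by $1/(1-p)$, so activations keep the same expectation and nothing needs to change at test time.

```python
import numpy as np

def dropout(x, p_drop, rng, train=True):
    # inverted dropout: zero units w.p. p_drop, rescale the survivors
    if not train or p_drop == 0.0:
        return x
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask

rng = np.random.default_rng(0)
x = np.ones(100_000)
y = dropout(x, 0.5, rng)                  # train-time activations
y_test = dropout(x, 0.5, rng, train=False)  # identity at test time
```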
|
10,575
|
What's the difference between mathematical statistics and statistics?
|
There are three types of statisticians:
those that (prefer to) work with real data,
those that (prefer to) work with simulated data,
those that (prefer to) work with the symbol $X$.
Math stat types would be (3). Typically, type (1) statisticians have some prefix attached to make clear the source of the data they work with (biostatistics, econometrics, psychometrics, ...) because these fields have implicit shared assumptions about the data they use and some commonly accepted ordering
|
10,576
|
What's the difference between mathematical statistics and statistics?
|
Mathematical statistics concentrates on theorems and proofs and mathematical rigor, like other branches of math. It tends to be studied in math departments, and mathematical statisticians often try to derive new theorems.
"Statistics" includes mathematical statistics, but the other parts of the field tend to concentrate on more practical problems of data analysis and so on.
|
10,577
|
What's the difference between mathematical statistics and statistics?
|
The boundaries are always very blurry but I would say that mathematical statistics is more focused on the mathematical foundations of statistics, whereas statistics in general is more driven by the data and its analysis.
|
10,578
|
What's the difference between mathematical statistics and statistics?
|
There is no difference. The science of Statistics as it is taught in academic institutions throughout the world is basically short for "Mathematical Statistics". This is divided into "Applied (mathematical) Statistics" and "Theoretical (mathematical) Statistics". In both cases, Statistics is a subfield of math (or applied math if you will) while all its principles and theorems are derived from pure math.
"Non-mathematical" statistics, for lack of a better term, would be (for me) something like the percentage of ball possession of a football team after a game, i.e. the act to register and report some real-world statistic(s).
|
10,579
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
|
The point is that sometimes, different models (for the same data) can lead to likelihood functions which differ by a multiplicative constant, but the information content must clearly be the same. An example:
We model $n$ independent Bernoulli experiments, leading to data $X_1, \dots, X_n$, each with a Bernoulli distribution with (probability) parameter $p$. This leads to the likelihood function
$$
\prod_{i=1}^n p^{x_i} (1-p)^{1-x_i}
$$
Or we can summarize the data by the variable $Y=X_1+X_2+\dotsm+X_n$, which has a binomial distribution, leading to the likelihood function
$$
\binom{n}{y} p^y (1-p)^{n-y}
$$
which, as a function of the unknown parameter $p$, is proportional to the former likelihood function. The two likelihood functions clearly contain the same information, and should lead to the same inferences!
And indeed, by definition, they are considered the same likelihood function.
Another viewpoint: observe that when the likelihood functions are used in Bayes theorem, as needed for bayesian analysis, such multiplicative constants simply cancel! So they are clearly irrelevant to bayesian inference. Likewise, they cancel when calculating likelihood ratios, as used in optimal hypothesis tests (Neyman-Pearson lemma), and they have no influence on the value of maximum likelihood estimators. So we can see that in much of frequentist inference they cannot play a role.
We can argue from still another viewpoint. The Bernoulli probability function (hereafter we use the term "density") above is really a density with respect to counting measure, that is, the measure on the non-negative integers with mass one for each non-negative integer. But we could have defined a density with respect to some other dominating measure. In this example this will seem (and is) artificial, but in larger spaces (function spaces) it is really fundamental! Let us, for the purpose of illustration, use the specific geometric distribution, written $\lambda$, with $\lambda(0)=1/2$, $\lambda(1)=1/4$, $\lambda(2)=1/8$ and so on. Then the density of the Bernoulli distribution with respect to $\lambda$ is given by
$$
f_{\lambda}(x) = p^x (1-p)^{1-x}\cdot 2^{x+1}
$$
meaning that $$
P(X=x)= f_\lambda(x) \cdot \lambda(x)
$$
With this new, dominating, measure, the likelihood function becomes (with notation from above)
$$
\prod_{i=1}^n p^{x_i} (1-p)^{1-x_i} 2^{x_i+1} = p^y (1-p)^{n-y} 2^{y+n}
$$
note the extra factor $2^{y+n}$. So when changing the dominating measure used in the definition of the likelihood function, there arises a new multiplicative constant, which does not depend on the unknown parameter $p$, and is clearly irrelevant. That is another way to see how multiplicative constants must be irrelevant. This argument can be generalized using Radon-Nikodym derivatives (as the argument above is an example of.)
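As a quick numerical check (a minimal sketch in pure Python; the values $n=20$, $y=13$ are arbitrary illustrative choices), the Bernoulli-product and binomial likelihoods above differ by the constant factor $\binom{n}{y}$ but are maximized at the same $p$:

```python
import math

# Arbitrary illustrative data: n Bernoulli trials with y successes
n, y = 20, 13
grid = [i / 1000 for i in range(1, 1000)]  # candidate values of p

def lik_bernoulli(p):
    # product of Bernoulli densities: p^y (1-p)^(n-y)
    return p**y * (1 - p)**(n - y)

def lik_binomial(p):
    # binomial likelihood: C(n, y) p^y (1-p)^(n-y)
    return math.comb(n, y) * lik_bernoulli(p)

p_hat_1 = max(grid, key=lik_bernoulli)
p_hat_2 = max(grid, key=lik_binomial)
print(p_hat_1, p_hat_2)  # both 0.65, i.e. y/n
```

Multiplying every grid value by the same positive constant cannot change which grid point wins, which is the whole point.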
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
|
The point is that sometimes, different models (for the same data) can lead to likelihood functions which differ by a multiplicative constant, but the information content must clearly be the same. An e
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
The point is that sometimes, different models (for the same data) can lead to likelihood functions which differ by a multiplicative constant, but the information content must clearly be the same. An example:
We model $n$ independent Bernoulli experiments, leading to data $X_1, \dots, X_n$, each with a Bernoulli distribution with (probability) parameter $p$. This leads to the likelihood function
$$
\prod_{i=1}^n p^{x_i} (1-p)^{1-x_i}
$$
Or we can summarize the data by the binomially distributed variable $Y=X_1+X_2+\dotsm+X_n$, which has a binomial distribution, leading to the likelihood function
$$
\binom{n}{y} p^y (1-p)^{n-y}
$$
which, as a function of the unknown parameter $p$, is proportional to the former likelihood function. The two likelihood functions clearly contains the same information, and should lead to the same inferences!
And indeed, by definition, they are considered the same likelihood function.
Another viewpoint: observe that when the likelihood functions are used in Bayes theorem, as needed for bayesian analysis, such multiplicative constants simply cancel! so they are clearly irrelevant to bayesian inference. Likewise, it will cancel when calculating likelihood ratios, as used in optimal hypothesis tests (Neyman-Pearson lemma.) And it will have no influence on the value of maximum likelihood estimators. So we can see that in much of frequentist inference it cannot play a role.
We can argue from still another viewpoint. The Bernoulli probability function (hereafter we use the term "density") above is really a density with respect to counting measure, that is, the measure on the non-negative integers with mass one for each non-negative integer. But we could have defined a density with respect to some other dominating measure. In this example this will seem (and is) artificial, but in larger spaces (function spaces) it is really fundamental! Let us, for the purpose of illustration, use the specific geometric distribution, written $\lambda$, with $\lambda(0)=1/2$, $\lambda(1)=1/4$, $\lambda(2)=1/8$ and so on. Then the density of the Bernoulli distribution with respect to $\lambda$ is given by
$$
f_{\lambda}(x) = p^x (1-p)^{1-x}\cdot 2^{x+1}
$$
meaning that $$
P(X=x)= f_\lambda(x) \cdot \lambda(x)
$$
With this new, dominating, measure, the likelihood function becomes (with notation from above)
$$
\prod_{i=1}^n p^{x_i} (1-p)^{1-x_i} 2^{x_i+1} = p^y (1-p)^{n-y} 2^{y+n}
$$
note the extra factor $2^{y+n}$. So when changing the dominating measure used in the definition of the likelihood function, there arises a new multiplicative constant, which does not depend on the unknown parameter $p$, and is clearly irrelevant. That is another way to see how multiplicative constants must be irrelevant. This argument can be generalized using Radon-Nikodym derivatives (as the argument above is an example of.)
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
The point is that sometimes, different models (for the same data) can lead to likelihood functions which differ by a multiplicative constant, but the information content must clearly be the same. An e
|
10,580
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
|
It basically means that only relative value of the PDF matters. For instance, the standard normal (Gaussian) PDF is: $f(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$, your book is saying that they could use $g(x)=e^{-x^2/2}$ instead, because they don't care for the scale, i.e. $c=\frac{1}{\sqrt{2\pi}}$.
This happens because they maximize the likelihood function, and $c\cdot g(x)$ and $g(x)$ attain their maximum at the same point. Hence, the maximizer of $e^{-x^2/2}$ will be the same as that of $f(x)$. So, they don't bother about the scale.
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
|
It basically means that only relative value of the PDF matters. For instance, the standard normal (Gaussian) PDF is: $f(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$, your book is saying that they could use $g(
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
It basically means that only relative value of the PDF matters. For instance, the standard normal (Gaussian) PDF is: $f(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$, your book is saying that they could use $g(x)=e^{-x^2/2}$ instead, because they don't care for the scale, i.e. $c=\frac{1}{\sqrt{2\pi}}$.
This happens because they maximize likelihood function, and $c\cdot g(x)$ and $g(x)$ will have the same maximum. Hence, maximum of $e^{-x^2/2}$ will be the same as of $f(x)$. So, they don't bother about the scale.
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
It basically means that only relative value of the PDF matters. For instance, the standard normal (Gaussian) PDF is: $f(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$, your book is saying that they could use $g(
|
10,581
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
|
I cannot explain the meaning of the quotation, but for maximum-likelihood estimation, it does not matter whether we choose to find the maximum of
the likelihood function $L(\mathbf x; \theta)$ (regarded as a function
of $\theta$) or the maximum of
$aL(\mathbf x; \theta)$ where $a$ is some constant.
This is because we are not interested in the maximum value of
$L(\mathbf x; \theta)$ but rather the value $\theta_{\text{ML}}$
where this maximum occurs, and both $L(\mathbf x; \theta)$
and $aL(\mathbf x; \theta)$ achieve their maximum value at the same
$\theta_{\text{ML}}$. So, multiplicative constants can be ignored.
Similarly, we could choose to consider any monotone function $g(\cdot)$
(such as the logarithm) of the likelihood function $L(\mathbf x; \theta)$, determine
the maximum of $g(L(\mathbf x;\theta))$, and infer the value of
$\theta_{\text{ML}}$ from this. For the logarithm, the multiplicative constant
$a$ becomes the additive constant $\ln(a)$ and this too can be ignored in
the process of finding the location of the maximum:
$\ln(a)+\ln(L(\mathbf x; \theta))$
is maximized at the same point as $\ln(L(\mathbf x; \theta))$.
Turning to maximum a posteriori probability (MAP) estimation,
$\theta$ is regarded as a realization of a random variable $\Theta$ with
a priori density function $f_{\Theta}(\theta)$,
the data $\mathbf x$ is regarded as a
realization of a random variable $\mathbf X$, and the likelihood function is considered to be the value of the conditional density
$f_{\mathbf X\mid \Theta}(\mathbf x\mid \Theta=\theta)$
of $\mathbf X$ conditioned on $\Theta = \theta$; said
conditional density function being evaluated at $\mathbf x$.
The a posteriori density of $\Theta$ is
$$f_{\Theta\mid \mathbf X}(\theta \mid \mathbf x)
= \frac{f_{\mathbf X\mid \Theta}(\mathbf x\mid \Theta=\theta)f_\Theta(\theta)}{f_{\mathbf X}(\mathbf x)} \tag{1}$$
in which we recognize the numerator as the joint density
$f_{\mathbf X, \Theta}(\mathbf x, \theta)$ of the data and the parameter
being estimated. The point $\theta_{\text{MAP}}$ where
$f_{\Theta\mid \mathbf X}(\theta \mid \mathbf x)$ attains
its maximum value is the MAP estimate of $\theta$, and,
using the same arguments as in the previous paragraphs, we see that
we can ignore $[f_{\mathbf X}(\mathbf x)]^{-1}$ on the
right side of $(1)$ as a multiplicative constant just
as we can ignore multiplicative constants in both
$f_{\mathbf X\mid \Theta}(\mathbf x\mid \Theta=\theta)$ and in
$f_\Theta(\theta)$. Similarly when log-likelihoods are being
used, we can ignore additive constants.
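To make the MAP point concrete, here is a minimal sketch in pure Python (the Beta(2,2) prior and the data $n=20$, $y=13$ are just illustrative assumptions): maximizing the unnormalized posterior, with $[f_{\mathbf X}(\mathbf x)]^{-1}$ and all other constants dropped, still lands on the analytic posterior mode.

```python
# Illustrative binomial data and a Beta(2, 2) prior (arbitrary choices)
n, y = 20, 13
grid = [i / 10000 for i in range(1, 10000)]  # candidate parameter values

def unnorm_posterior(p):
    likelihood = p**y * (1 - p)**(n - y)  # binomial coefficient dropped
    prior = p * (1 - p)                   # Beta(2, 2) up to its constant
    return likelihood * prior             # f_X(x) in the denominator dropped

p_map = max(grid, key=unnorm_posterior)
print(p_map)  # close to the analytic mode (y + 1) / (n + 2)
```

The posterior here is Beta($y+2$, $n-y+2$), whose mode $(y+1)/(n+2)\approx 0.636$ is recovered despite never computing a single normalizing constant.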
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
|
I cannot explain the meaning of the quotation, but for maximum-likelihood estimation, it does not matter whether we choose to find the maximum of
the likelihood function $L(\mathbf x; \theta)$ (rega
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
I cannot explain the meaning of the quotation, but for maximum-likelihood estimation, it does not matter whether we choose to find the maximum of
the likelihood function $L(\mathbf x; \theta)$ (regarded as a function
of $\theta$ or the maximum of
$aL(\mathbf x; \theta)$ where $a$ is some constant.
This is because we are not interested in the maximum value of
$L(\mathbf x; \theta)$ but rather the value $\theta_{\text{ML}}$
where this maximum occurs, and both $L(\mathbf x; \theta)$
and $aL(\mathbf x; \theta)$ achieve their maximum value at the same
$\theta_{\text{ML}}$. So, multiplicative constants can be ignored.
Similarly, we could choose to consider any monotone function $g(\cdot)$
(such as the logarithm) of the likelihood function $L(\mathbf x; \theta)$, determine
the maximum of $g(L(\mathbf x;\theta))$, and infer the value of
$\theta_{\text{ML}}$ from this. For the logarithm, the multipliative constant
$a$ becomes the additive constant $\ln(a)$ and this too can be ignored in
the process of finding the location of the maximum:
$\ln(a)+\ln(L(\mathbf x; \theta)$
is maximized at the same point as $\ln(L(\mathbf x; \theta)$.
Turning to maximum a posteriori probability (MAP) estimation,
$\theta$ is regarded as a realization of a random variable $\Theta$ with
a priori density function $f_{\Theta}(\theta)$,
the data $\mathbf x$ is regarded as a
realization of a random variable $\mathbf X$, and the likelihood function is considered to be the value of the conditional density
$f_{\mathbf X\mid \Theta}(\mathbf x\mid \Theta=\theta)$
of $\mathbf X$ conditioned on $\Theta = \theta$; said
conditional density function being evaluated at $\mathbf x$.
The a posteriori density of $\Theta$ is
$$f_{\Theta\mid \mathbf X}(\theta \mid \mathbf x)
= \frac{f_{\mathbf X\mid \Theta}(\mathbf x\mid \Theta=\theta)f_\Theta(\theta)}{f_{\mathbf X}(\mathbf x)} \tag{1}$$
in which we recognize the numerator as the joint density
$f_{\mathbf X, \Theta}(\mathbf x, \theta)$ of the data and the parameter
being estimated. The point $\theta_{\text{MAP}}$ where
$f_{\Theta\mid \mathbf X}(\theta \mid \mathbf x)$ attains
its maximum value is the MAP estimate of $\theta$, and,
using the same arguments as in the paragraph, we see that
we can ignore $[f_{\mathbf X}(\mathbf x)]^{-1}$ on the
right side of $(1)$ as a multiplicative constant just
as we can ignore multiplicative constants in both
$f_{\mathbf X\mid \Theta}(\mathbf x\mid \Theta=\theta)$ and in
$f_\Theta(\theta)$. Similarly when log-likelihoods are being
used, we can ignore additive constants.
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
I cannot explain the meaning of the quotation, but for maximum-likelihood estimation, it does not matter whether we choose to find the maximum of
the likelihood function $L(\mathbf x; \theta)$ (rega
|
10,582
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
|
In layman's terms, you'll often look for the maximum likelihood and $f(x)$ and $kf(x)$ share the same critical points.
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
|
In layman's terms, you'll often look for the maximum likelihood and $f(x)$ and $kf(x)$ share the same critical points.
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
In layman's terms, you'll often look for the maximum likelihood and $f(x)$ and $kf(x)$ share the same critical points.
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
In layman's terms, you'll often look for the maximum likelihood and $f(x)$ and $kf(x)$ share the same critical points.
|
10,583
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
|
I would suggest not losing sight of any constant terms in the likelihood function (i.e. terms that do not include the parameters). In usual circumstances, they do not affect the $\text{argmax}$ of the likelihood, as already mentioned. But:
There may be unusual circumstances when you will have to maximize the likelihood subject to a ceiling, and then you should "remember" to include any constants in the calculation of its value.
Also, you may be performing model selection tests for non-nested models, using the value of the likelihood in the process, and since the models are non-nested the two likelihoods will have different constants.
Apart from these, the sentence
"Because the likelihood is only defined up to a multiplicative
constant of proportionality (or an additive constant for the
log-likelihood)"
is wrong, because the likelihood is first a joint probability density function, not just "any" objective function to be maximized.
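A sketch of the non-nested-comparison point, under illustrative assumptions (a simulated Gaussian sample compared under fitted normal and Laplace models, in pure Python): comparing models by their maximized log-likelihoods only makes sense if each model's own constant terms are kept, since those constants differ between models and do not cancel.

```python
import math
import random

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(500)]  # simulated Gaussian data
n = len(x)

# Normal model at its MLE; keep the constant -0.5*log(2*pi)
mu = sum(x) / n
sigma = math.sqrt(sum((xi - mu)**2 for xi in x) / n)
ll_normal = sum(-0.5 * math.log(2 * math.pi) - math.log(sigma)
                - (xi - mu)**2 / (2 * sigma**2) for xi in x)

# Laplace model at its MLE; keep the constant -log(2*b)
m = sorted(x)[n // 2]  # (upper) sample median
b = sum(abs(xi - m) for xi in x) / n
ll_laplace = sum(-math.log(2 * b) - abs(xi - m) / b for xi in x)

print(ll_normal, ll_laplace)
```

With data actually drawn from a normal distribution, the full normal log-likelihood comes out larger; had we silently discarded $-\tfrac12\log(2\pi)$ from one model and $-\log 2$ from the other, the comparison would be meaningless.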
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
|
I would suggest not to drop from sight any constant terms in the likelihood function (i.e. terms that do not include the parameters). In usual circumstances, they do not affect the $\text {argmax}$ of
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
I would suggest not to drop from sight any constant terms in the likelihood function (i.e. terms that do not include the parameters). In usual circumstances, they do not affect the $\text {argmax}$ of the likelihood, as already mentioned. But:
There may be unusual circumstances when you will have to maximize the likelihood subject to a ceiling -and then you should "remember" to include any constants in the calculation of its value.
Also, you may be performing model selection tests for non-nested models, using the value of the likelihood in the process -and since the models are non-nested the two likelihoods will have different constants.
Apart from these, the sentence
"Because the likelihood is only defined up to a multiplicative
constant of proportionality (or an additive constant for the
log-likelihood)"
is wrong, because the likelihood is first a joint probability density function, not just "any" objective function to be maximized.
|
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in pr
I would suggest not to drop from sight any constant terms in the likelihood function (i.e. terms that do not include the parameters). In usual circumstances, they do not affect the $\text {argmax}$ of
|
10,584
|
What is the manifold assumption in semi-supervised learning?
|
Imagine that you have a bunch of seeds fastened on a glass plate, which is resting horizontally on a table. Because of the way we typically think about space, it would be safe to say that these seeds live in a two-dimensional space, more or less, because each seed can be identified by the two numbers that give that seed's coordinates on the surface of the glass.
Now imagine that you take the plate and tilt it diagonally upwards, so that the surface of the glass is no longer horizontal with respect to the ground. Now, if you wanted to locate one of the seeds, you have a couple of options. If you decide to ignore the glass, then each seed would appear to be floating in the three-dimensional space above the table, and so you'd need to describe each seed's location using three numbers, one for each spatial direction. But just by tilting the glass, you haven't changed the fact that the seeds still live on a two-dimensional surface. So you could describe how the surface of the glass lies in three-dimensional space, and then you could describe the locations of the seeds on the glass using your original two dimensions.
In this thought experiment, the glass surface is akin to a low-dimensional manifold that exists in a higher-dimensional space: no matter how you rotate the plate in three dimensions, the seeds still live along the surface of a two-dimensional plane.
Examples
More generally, a low-dimensional manifold embedded in a higher-dimensional space is just a set of points that, for whatever reason, are considered to be connected or part of the same set. Notably, the manifold might be contorted somehow in the higher-dimensional space (e.g., perhaps the surface of the glass is warped into a bowl shape instead of a plate shape), but the manifold is still basically low-dimensional. Especially in high-dimensional space, this manifold could take many different forms and shapes, but because we live in a three-dimensional world, it's difficult to imagine examples that have more than three dimensions. Just as a sample, though, consider these examples:
a piece of glass (planar, two-dimensional) in physical space (three-dimensional)
a single thread (one-dimensional) in a piece of fabric (two-dimensional)
a piece of fabric (two-dimensional) crumpled up in the washing machine (three-dimensional)
Common examples of manifolds in machine learning (or at least sets that are hypothesized to live along low-dimensional manifolds) include:
images of natural scenes (typically you do not see images of white noise, for instance, meaning that "natural" images do not occupy the entire space of possible pixel configurations)
natural sounds (similar argument)
human movements (the human body has hundreds of degrees of freedom, but movements appear to live in a space that can be represented effectively using ~10 dimensions)
Learning the manifold
The manifold assumption in machine learning is that, instead of assuming that data in the world could come from every part of the possible space (e.g., the space of all possible 1-megapixel images, including white noise), it makes more sense to assume that training data come from relatively low-dimensional manifolds (like the glass plate with the seeds). Then learning the structure of the manifold becomes an important task; additionally, this learning task seems to be possible without the use of labeled training data.
There are many, many different ways of learning the structure of a low-dimensional manifold. One of the most widely used approaches is PCA, which assumes that the manifold consists of a single ellipsoidal "blob" like a pancake or cigar shape, embedded in a higher-dimensional space. More complicated techniques like isomap, ICA, or sparse coding relax some of these assumptions in various ways.
Semi-supervised learning
The reason the manifold assumption is important in semi-supervised learning is two-fold. For many realistic tasks (e.g., determining whether the pixels in an image show a 4 or a 5), there is much more data available in the world without labels (e.g., images that might have digits in them) than with labels (e.g., images that are explicitly labeled "4" or "5"). In addition, there are many orders of magnitude more information available in the pixels of the images than there are in the labels of the images that have labels. But, like I described above, natural images aren't actually sampled from the uniform distribution over pixel configurations, so it seems likely that there is some manifold that captures the structure of natural images. But if we assume further that the images containing 4s all lie on their own manifold, while the images containing 5s likewise lie on a different but nearby manifold, then we can try to develop representations for each of these manifolds using just the pixel data, hoping that the different manifolds will be represented using different learned features of the data. Then, later, when we have a few bits of label data available, we can use those bits to simply apply labels to the already-identified manifolds.
Most of this explanation comes from work in the deep and feature learning literature. Yoshua Bengio and Yann LeCun (see the Energy Based Learning Tutorial) have particularly accessible arguments in this area.
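The "seeds on a tilted glass plate" picture can be sketched numerically (a minimal NumPy example; the tilt matrix and point count are arbitrary illustrative choices): running PCA on the embedded 3-D points reveals that only two directions carry any variance, i.e. the data is intrinsically two-dimensional.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Seeds on the plate": 500 points with intrinsic 2-D coordinates
u = rng.uniform(-1.0, 1.0, size=(500, 2))

# Tilt the plate: embed the plane in 3-D with a fixed linear map
tilt = np.array([[1.0, 0.0, 0.5],
                 [0.0, 1.0, 0.5]])
x = u @ tilt  # 500 points in R^3, all lying on a 2-D plane

# PCA via SVD of the centered data
xc = x - x.mean(axis=0)
s = np.linalg.svd(xc, compute_uv=False)
variance = s**2 / len(x)
print(variance)  # two substantial values, third numerically zero
```

Real data is of course noisier and the manifold nonlinear, which is why techniques beyond PCA (isomap, sparse coding, etc.) are needed, but the principle of the third variance vanishing is the same.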
|
10,585
|
What is the manifold assumption in semi-supervised learning?
|
First, make sure that you understand what an embedding is. It's borrowed from mathematics. Roughly speaking, it is a mapping of the data into another space (often called embedding space or feature space), preserving some structure or properties of the data. Note that its dimensionality can be bigger or smaller than the input space. In practice, the mapping is complex and highly non-linear. A few examples:
A real-valued "word vector" to represent a word, such as word2vec
The activations of a layer of a convnet, such as the FC7 layer of AlexNet (FC7 is the 7th fully-connected layer)
To illustrate, I'll take an example of this paper from Josh Tenenbaum:
Fig. 1 illustrates the feature discovery problem with an example from
visual perception. The set of views of a face from all possible
viewpoints is an extremely high-dimensional data set when represented
as image arrays in a computer or on a retina; for example, 32 x 32
pixel grey-scale images can be thought of as points in a
1,024-dimensional observation space [input space] . The perceptually meaningful
structure of these images [feature space], however, is of much lower dimensionality;
all of the images in Fig. 1 lie on a two-dimensional manifold
parameterized by viewing angle
Josh Tenenbaum then discusses the difficulties of learning such a mapping from input to feature space. But let's go back to the question: we are interested in how the input and feature spaces are related.
The 32*32 array of grey pixel values is the input space
The [x1=elevation, x2=azimuth] space is the feature space (although simplistic, it can be thought as a valid embedding space).
Re-stating the manifold hypothesis (quoting from this great article):
The manifold hypothesis is that natural data forms lower-dimensional
manifolds in its embedding space
With this example, it is clear that the dimensionality of the embedding space is much lower than that of the input space: 2 vs 1024. (This distinction will hold even for higher-dimensional, less simplistic choices of embedding space.)
To convince yourself that the embedding forms a manifold, I invite you to read the rest of the Tenenbaum paper or the Colah article.
Note: this is just an illustration of what the manifold hypothesis means, not an argument of why it happens.
Related: Explanation of word vectors, word2vec paper
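To make the hypothesis concrete, here is a toy numpy sketch (purely illustrative, not from the paper; the random "rendering" map is an arbitrary stand-in) of data whose ambient dimension is 1024 but whose intrinsic dimension is 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample "images" that depend only on two pose parameters
# (elevation, azimuth), then embed them in a 1024-dimensional
# pixel-like space.  Every point lies on a 2-D manifold traced
# out by the pose parameters, despite the 1024-D ambient space.
n = 500
pose = rng.uniform(0, np.pi, size=(n, 2))   # (elevation, azimuth)

# A fixed random nonlinear map R^2 -> R^1024 standing in for rendering.
W = rng.normal(size=(2, 1024))
images = np.sin(pose @ W)                   # n "images", 32*32 = 1024 pixels each

print(images.shape)   # ambient dimension
print(pose.shape)     # intrinsic coordinates
```

A manifold-learning method such as Isomap (from the same Tenenbaum line of work) would aim to recover the 2-D structure from the 1024-D points alone.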
|
What is the manifold assumption in semi-supervised learning?
|
First, make sure that you understand what an embedding is. It's borrowed from mathematics. Roughly speaking, it is a mapping of the data into another space (often called embedding space or feature spa
|
What is the manifold assumption in semi-supervised learning?
First, make sure that you understand what an embedding is. It's borrowed from mathematics. Roughly speaking, it is a mapping of the data into another space (often called embedding space or feature space), preserving some structure or properties of the data. Note that its dimensionality can be bigger or smaller than the input space. In practice, the mapping is complex and highly non-linear. A few examples:
A real-valued "word vector" to represent a word, such as word2vec
The activations of a layer of a convnet, such as the FC7 layer of AlexNet (FC7 is the 7th fully-connected layer)
To illustrate, I'll take an example of this paper from Josh Tenenbaum:
Fig. 1 illustrates the feature discovery problem with an example from
visual perception. The set of views of a face from all possible
viewpoints is an extremely high-dimensional data set when represented
as image arrays in a computer or on a retina; for example, 32 x 32
pixel grey-scale images can be thought of as points in a
1,024-dimensional observation space [input space] . The perceptually meaningful
structure of these images [feature space], however, is of much lower dimensionality;
all of the images in Fig. 1 lie on a two-dimensional manifold
parameterized by viewing angle
Josh Tenenbaum then discusses the difficulties of learning such a mapping from input to feature space. But let's go back to the question: we are interested in how the input and feature spaces are related.
The 32*32 array of grey pixel values is the input space
The [x1=elevation, x2=azimuth] space is the feature space (although simplistic, it can be thought as a valid embedding space).
Re-stating the manifold hypothesis (quoting from this great article):
The manifold hypothesis is that natural data forms lower-dimensional
manifolds in its embedding space
With this example, it is clear that the dimensionality of the embedding space is much lower than that of the input space: 2 vs 1024. (This distinction will hold even for higher-dimensional, less simplistic choices of embedding space.)
To convince yourself that the embedding forms a manifold, I invite you to read the rest of the Tenenbaum paper or the Colah article.
Note: this is just an illustration of what the manifold hypothesis means, not an argument of why it happens.
Related: Explanation of word vectors, word2vec paper
|
What is the manifold assumption in semi-supervised learning?
First, make sure that you understand what an embedding is. It's borrowed from mathematics. Roughly speaking, it is a mapping of the data into another space (often called embedding space or feature spa
|
10,586
|
Seeking certain type of ARIMA explanation
|
My suggested reading for an intro to ARIMA modelling would be
Applied Time Series Analysis for the Social Sciences 1980
by R McCleary ; R A Hay ; E E Meidinger ; D McDowall
This is aimed at social scientists so the mathematical demands are not too rigorous. Also for shorter treatments I would suggest two Sage Green Books (although they are entirely redundant with the McCleary book),
Interrupted Time Series Analysis
by David McDowall, Richard McCleary,
Errol Meidinger, and Richard A. Hay,
Jr
Time Series Analysis by Charles
W. Ostrom
The Ostrom text is only ARMA modelling and does not discuss forecasting. I don't think they would meet your requirement for graphing forecast error either. I'm sure you could dig up more useful resources by examining questions tagged with time-series on this forum as well.
|
Seeking certain type of ARIMA explanation
|
My suggested reading for an intro to ARIMA modelling would be
Applied Time Series Analysis for the Social Sciences 1980
by R McCleary ; R A Hay ; E E Meidinger ; D McDowall
This is aimed at social sci
|
Seeking certain type of ARIMA explanation
My suggested reading for an intro to ARIMA modelling would be
Applied Time Series Analysis for the Social Sciences 1980
by R McCleary ; R A Hay ; E E Meidinger ; D McDowall
This is aimed at social scientists so the mathematical demands are not too rigorous. Also for shorter treatments I would suggest two Sage Green Books (although they are entirely redundant with the McCleary book),
Interrupted Time Series Analysis
by David McDowall, Richard McCleary,
Errol Meidinger, and Richard A. Hay,
Jr
Time Series Analysis by Charles
W. Ostrom
The Ostrom text is only ARMA modelling and does not discuss forecasting. I don't think they would meet your requirement for graphing forecast error either. I'm sure you could dig up more useful resources by examining questions tagged with time-series on this forum as well.
|
Seeking certain type of ARIMA explanation
My suggested reading for an intro to ARIMA modelling would be
Applied Time Series Analysis for the Social Sciences 1980
by R McCleary ; R A Hay ; E E Meidinger ; D McDowall
This is aimed at social sci
|
10,587
|
Seeking certain type of ARIMA explanation
|
I will try and respond to the gentle urging of whuber to simply “respond to the question” and stay on topic. We are given 144 monthly readings of a series called “The Airline Series” . Box and Jenkins were widely criticized for providing a forecast that was wildly on the high side due to the “explosive nature” of a reverse logged transformation.
Visually we get the impression that the variance of the original series increases with the level of the series, suggesting a need for a transformation. However we know that one of the requirements for a useful model is that the variance of the “model errors” needs to be homogeneous. No assumptions are necessary about the variance of the original series. They are identical if the model is simply a constant, i.e. y(t)=u. As https://stats.stackexchange.com/users/2392/probabilityislogic stated so clearly in his response to Advice on explaining heterogeneity / heteroscedasticity: “one thing which I always find amusing is this "non-normality of the data" that people worry about. The data does not need to be normally distributed, but the error term does”
Early work in time series often erroneously jumped to conclusions about unwarranted transformations. We will discover here that the remedial transformation for this data is to simply add three indicator dummy series to the ARIMA model reflecting an adjustment for three unusual data points.
Following is the plot of the autocorrelation function suggesting a strong autocorrelation at lag 12 (.76) and at lag 1 (.948). Autocorrelations are simply regression coefficients in a model where y is the dependent variable being predicted by a lag of y.
The analysis above suggests that one model the first differences of the series and study that “residual series”, which is identical to the first differences, for its properties.
This analysis reconfirms the idea that a strong seasonal pattern exists in the data that could be remedied or modeled by a model that contained two differencing operators.
This simple double differencing yields a set of residuals, a.k.a. an adjusted series or, loosely speaking, a transformed series that evidences non-constant variance, but the reason for the non-constant variance is the non-constant mean of the residuals. Here is a plot of the doubly differenced series, suggesting three anomalies at the end of the series. The autocorrelation of this series falsely indicates that “all is well” and that there might be no need for an MA(1) adjustment. Care should be taken, as there is a suggestion of anomalies in the data, thus the acf is biased downwards. This is known as the “Alice in Wonderland Effect”, i.e. accepting the null hypothesis of no evident structure when that structure is being masked by a violation of one of the assumptions.
We visually detect three unusual points ( 117,135,136)
This step of detecting the outliers is called Intervention Detection and can be easily, or not so easily, programmed following the work of Tsay.
If we add three indicators to the model, we get
We can then estimate
And receive a plot of the residuals and the acf
This acf suggests that we potentially add two moving average coefficients to the model. Thus the next estimated model might be:
Yielding
One could then delete the non-significant constant and get a refined model :
We note that no power transformations were needed whatsoever to obtain a set of residuals that has constant variance. Note that the forecasts are non-explosive.
In terms of a simple weighted sum, we have: 13 weights; 3 non-zero and equal to (1.0, 1.0, -1.0), at lags 1, 12 and 13 respectively.
This material was presented in a way that was non-automatic and consequently required user interaction in terms of making modeling decisions.
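As a rough illustration (mine, not from the original analysis) of the weighted sum implied by the two differencing operators alone (ignoring the MA coefficients and the outlier dummies), the one-step rule is y(t) = y(t-1) + y(t-12) - y(t-13):

```python
import numpy as np

def double_diff_forecast(y, season=12):
    """One-step forecast implied by (1-B)(1-B^s) y(t) = e(t),
    i.e. y(t) = y(t-1) + y(t-s) - y(t-s-1)."""
    y = np.asarray(y, dtype=float)
    return y[-1] + y[-season] - y[-season - 1]

# Toy monthly series with a linear trend plus a seasonal cycle;
# the three-weight rule tracks both components.
t = np.arange(36)
y = 10 + 0.5 * t + 3 * np.sin(2 * np.pi * t / 12)
print(double_diff_forecast(y))
```

For this noise-free toy series the rule reproduces the next value exactly, since double differencing removes both the linear trend and the period-12 cycle.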
|
Seeking certain type of ARIMA explanation
|
I will try and respond to the gentle urging of whuber to simply “respond to the question” and stay on topic. We are given 144 monthly readings of a series called “The Airline Series” . Box and Jenkins
|
Seeking certain type of ARIMA explanation
I will try and respond to the gentle urging of whuber to simply “respond to the question” and stay on topic. We are given 144 monthly readings of a series called “The Airline Series” . Box and Jenkins were widely criticized for providing a forecast that was wildly on the high side due to the “explosive nature” of a reverse logged transformation.
Visually we get the impression that the variance of the original series increases with the level of the series, suggesting a need for a transformation. However we know that one of the requirements for a useful model is that the variance of the “model errors” needs to be homogeneous. No assumptions are necessary about the variance of the original series. They are identical if the model is simply a constant, i.e. y(t)=u. As https://stats.stackexchange.com/users/2392/probabilityislogic stated so clearly in his response to Advice on explaining heterogeneity / heteroscedasticity: “one thing which I always find amusing is this "non-normality of the data" that people worry about. The data does not need to be normally distributed, but the error term does”
Early work in time series often erroneously jumped to conclusions about unwarranted transformations. We will discover here that the remedial transformation for this data is to simply add three indicator dummy series to the ARIMA model reflecting an adjustment for three unusual data points.
Following is the plot of the autocorrelation function suggesting a strong autocorrelation at lag 12 (.76) and at lag 1 (.948). Autocorrelations are simply regression coefficients in a model where y is the dependent variable being predicted by a lag of y.
The analysis above suggests that one model the first differences of the series and study that “residual series”, which is identical to the first differences, for its properties.
This analysis reconfirms the idea that a strong seasonal pattern exists in the data that could be remedied or modeled by a model that contained two differencing operators.
This simple double differencing yields a set of residuals, a.k.a. an adjusted series or, loosely speaking, a transformed series that evidences non-constant variance, but the reason for the non-constant variance is the non-constant mean of the residuals. Here is a plot of the doubly differenced series, suggesting three anomalies at the end of the series. The autocorrelation of this series falsely indicates that “all is well” and that there might be no need for an MA(1) adjustment. Care should be taken, as there is a suggestion of anomalies in the data, thus the acf is biased downwards. This is known as the “Alice in Wonderland Effect”, i.e. accepting the null hypothesis of no evident structure when that structure is being masked by a violation of one of the assumptions.
We visually detect three unusual points ( 117,135,136)
This step of detecting the outliers is called Intervention Detection and can be easily, or not so easily, programmed following the work of Tsay.
If we add three indicators to the model, we get
We can then estimate
And receive a plot of the residuals and the acf
This acf suggests that we potentially add two moving average coefficients to the model. Thus the next estimated model might be:
Yielding
One could then delete the non-significant constant and get a refined model :
We note that no power transformations were needed whatsoever to obtain a set of residuals that has constant variance. Note that the forecasts are non-explosive.
In terms of a simple weighted sum, we have: 13 weights; 3 non-zero and equal to (1.0, 1.0, -1.0), at lags 1, 12 and 13 respectively.
This material was presented in a way that was non-automatic and consequently required user interaction in terms of making modeling decisions.
|
Seeking certain type of ARIMA explanation
I will try and respond to the gentle urging of whuber to simply “respond to the question” and stay on topic. We are given 144 monthly readings of a series called “The Airline Series” . Box and Jenkins
|
10,588
|
Seeking certain type of ARIMA explanation
|
I tried to do that in chapter 7 of my 1998 textbook with Makridakis & Wheelwright. Whether I succeeded or not I'll leave others to judge. You can read some of the chapter online via Amazon (from p311). Search for "ARIMA" in the book to persuade Amazon to show you the relevant pages.
Update: I have a new book which is free and online. The ARIMA chapter is here.
|
Seeking certain type of ARIMA explanation
|
I tried to do that in chapter 7 of my 1998 textbook with Makridakis & Wheelwright. Whether I succeeded or not I'll leave others to judge. You can read some of the chapter online via Amazon (from p311)
|
Seeking certain type of ARIMA explanation
I tried to do that in chapter 7 of my 1998 textbook with Makridakis & Wheelwright. Whether I succeeded or not I'll leave others to judge. You can read some of the chapter online via Amazon (from p311). Search for "ARIMA" in the book to persuade Amazon to show you the relevant pages.
Update: I have a new book which is free and online. The ARIMA chapter is here.
|
Seeking certain type of ARIMA explanation
I tried to do that in chapter 7 of my 1998 textbook with Makridakis & Wheelwright. Whether I succeeded or not I'll leave others to judge. You can read some of the chapter online via Amazon (from p311)
|
10,589
|
Seeking certain type of ARIMA explanation
|
I would recommend Forecasting with Univariate Box - Jenkins Models: Concepts and Cases by Alan Pankratz. This classic book has all the features that you asked for:
uses minimal math
extends the discussion beyond building a model into using that model to forecast specific cases
uses graphics as well as numerical results to characterize the fit between forecasted and actual values.
The only disadvantage is that it was printed in 1983 and might not cover some recent developments. The publisher is coming out with a 2nd edition in Jan 2014 with updates.
|
Seeking certain type of ARIMA explanation
|
I would recommend Forecasting with Univariate Box - Jenkins Models: Concepts and Cases by Alan Pankratz. This classic book has all the features that you asked for:
uses minimal math
extends the discu
|
Seeking certain type of ARIMA explanation
I would recommend Forecasting with Univariate Box - Jenkins Models: Concepts and Cases by Alan Pankratz. This classic book has all the features that you asked for:
uses minimal math
extends the discussion beyond building a model into using that model to forecast specific cases
uses graphics as well as numerical results to characterize the fit between forecasted and actual values.
The only disadvantage is that it was printed in 1983 and might not cover some recent developments. The publisher is coming out with a 2nd edition in Jan 2014 with updates.
|
Seeking certain type of ARIMA explanation
I would recommend Forecasting with Univariate Box - Jenkins Models: Concepts and Cases by Alan Pankratz. This classic book has all the features that you asked for:
uses minimal math
extends the discu
|
10,590
|
Seeking certain type of ARIMA explanation
|
An ARIMA model is simply a weighted average. It answers the double question:
How many periods (k) should I use to compute a weighted average
and
Precisely what are the k weights
It answers the maiden's prayer to determine how to adjust to previous values (and previous values ALONE) in order to project the series (which is really being caused by unspecified causal variables). Thus an ARIMA model is a poor man's causal model.
|
Seeking certain type of ARIMA explanation
|
An ARIMA model is simply a weighted average. It answers the double question:
How many periods (k) should I use to compute a weighted average
and
Precisely what are the k weights
It answers the ma
|
Seeking certain type of ARIMA explanation
An ARIMA model is simply a weighted average. It answers the double question:
How many periods (k) should I use to compute a weighted average
and
Precisely what are the k weights
It answers the maiden's prayer to determine how to adjust to previous values (and previous values ALONE) in order to project the series (which is really being caused by unspecified causal variables). Thus an ARIMA model is a poor man's causal model.
|
Seeking certain type of ARIMA explanation
An ARIMA model is simply a weighted average. It answers the double question:
How many periods (k) should I use to compute a weighted average
and
Precisely what are the k weights
It answers the ma
|
10,591
|
Transform Data to Desired Mean and Standard Deviation
|
Suppose you start $\{x_i\}$ with mean $m_1$ and non-zero standard deviation $s_1$ and you want to arrive at a similar set with mean $m_2$ and standard deviation $s_2$.
Then multiplying all your values by $\frac{s_2}{s_1}$ will give a set with mean $m_1 \times \frac{s_2}{s_1}$ and standard deviation $s_2$.
Now adding $m_2 - m_1 \times \frac{s_2}{s_1}$ will give a set with mean $m_2$ and standard deviation $s_2$.
So a new set $\{y_i\}$ with $$y_i= m_2+ (x_i- m_1) \times \frac{s_2}{s_1} $$ has mean $m_2$ and standard deviation $s_2$.
You would get the same result with the three steps: translate the mean to $0$, scale to the desired standard deviation; translate to the desired mean.
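A minimal numpy sketch of this transformation (toy data and target values of my choosing):

```python
import numpy as np

def rescale(x, m2, s2):
    """Map data to mean m2 and standard deviation s2
    (assumes the input standard deviation is non-zero)."""
    x = np.asarray(x, dtype=float)
    m1, s1 = x.mean(), x.std()
    return m2 + (x - m1) * (s2 / s1)

y = rescale([2.0, 4.0, 6.0, 8.0], m2=100.0, s2=15.0)
print(y.mean(), y.std())
```

Whether you use the population or sample standard deviation does not matter here, as long as you are consistent on both sides of the ratio.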
|
Transform Data to Desired Mean and Standard Deviation
|
Suppose you start $\{x_i\}$ with mean $m_1$ and non-zero standard deviation $s_1$ and you want to arrive at a similar set with mean $m_2$ and standard deviation $s_2$.
Then multiplying all your value
|
Transform Data to Desired Mean and Standard Deviation
Suppose you start $\{x_i\}$ with mean $m_1$ and non-zero standard deviation $s_1$ and you want to arrive at a similar set with mean $m_2$ and standard deviation $s_2$.
Then multiplying all your values by $\frac{s_2}{s_1}$ will give a set with mean $m_1 \times \frac{s_2}{s_1}$ and standard deviation $s_2$.
Now adding $m_2 - m_1 \times \frac{s_2}{s_1}$ will give a set with mean $m_2$ and standard deviation $s_2$.
So a new set $\{y_i\}$ with $$y_i= m_2+ (x_i- m_1) \times \frac{s_2}{s_1} $$ has mean $m_2$ and standard deviation $s_2$.
You would get the same result with the three steps: translate the mean to $0$, scale to the desired standard deviation; translate to the desired mean.
|
Transform Data to Desired Mean and Standard Deviation
Suppose you start $\{x_i\}$ with mean $m_1$ and non-zero standard deviation $s_1$ and you want to arrive at a similar set with mean $m_2$ and standard deviation $s_2$.
Then multiplying all your value
|
10,592
|
Transform Data to Desired Mean and Standard Deviation
|
Let’s consider the z-score calculation of data $x_i$ with mean $\bar{x}$ and standard deviation $s_x$.
$$z_i = \dfrac{x_i-\bar{x}}{s_x}$$
This means that, given some data $(x_i)$, we can transform to data with a mean of $0$ and standard deviation of $1$.
Rearranging, we get:
$$x_i = z_i s_x+ \bar{x}$$
This gives us back our original data with the original mean $\bar{x}$ and standard deviation $s_x$. But we could’ve gone to data $y_i$ with any mean $\bar{y}$ and standard deviation $s_y$.
$$y_i = z_i s_y +\bar{y}$$
Now combine the two transformations, first to $z_i$ and then to $y_i$.
$$y_i = \dfrac{x_i-\bar{x}}{s_x}s_y + \bar{y}$$
This is the same as what Henry posted, but I do think it is helpful to see that we get there by first going to standardized data and then transforming to data with the mean and standard deviation values we desire.
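The two steps can be written out directly in a short numpy sketch (arbitrary toy data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
xbar, sx = x.mean(), x.std()

# Step 1: standardize to z-scores (mean 0, sd 1).
z = (x - xbar) / sx

# Step 2: rescale to the desired mean and sd.
ybar, sy = 50.0, 10.0
y = z * sy + ybar

print(y.mean(), y.std())
```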
|
Transform Data to Desired Mean and Standard Deviation
|
Let’s consider the z-score calculation of data $x_i$ with mean $\bar{x}$ and standard deviation $s_x$.
$$z_i = \dfrac{x_i-\bar{x}}{s_x}$$
This means that, given some data $(x_i)$, we can transform to
|
Transform Data to Desired Mean and Standard Deviation
Let’s consider the z-score calculation of data $x_i$ with mean $\bar{x}$ and standard deviation $s_x$.
$$z_i = \dfrac{x_i-\bar{x}}{s_x}$$
This means that, given some data $(x_i)$, we can transform to data with a mean of $0$ and standard deviation of $1$.
Rearranging, we get:
$$x_i = z_i s_x+ \bar{x}$$
This gives us back our original data with the original mean $\bar{x}$ and standard deviation $s_x$. But we could’ve gone to data $y_i$ with any mean $\bar{y}$ and standard deviation $s_y$.
$$y_i = z_i s_y +\bar{y}$$
Now combine the two transformations, first to $z_i$ and then to $y_i$.
$$y_i = \dfrac{x_i-\bar{x}}{s_x}s_y + \bar{y}$$
This is the same as what Henry posted, but I do think it is helpful to see that we get there by first going to standardized data and then transforming to data with the mean and standard deviation values we desire.
|
Transform Data to Desired Mean and Standard Deviation
Let’s consider the z-score calculation of data $x_i$ with mean $\bar{x}$ and standard deviation $s_x$.
$$z_i = \dfrac{x_i-\bar{x}}{s_x}$$
This means that, given some data $(x_i)$, we can transform to
|
10,593
|
Averaging correlation values
|
The simple way is to add a categorical variable $z$ to identify the different experimental conditions and include it in your model along with an "interaction" with $x$; that is, $y \sim z + x\#z$. This conducts all five regressions at once. Its $R^2$ is what you want.
To see why averaging individual $R$ values may be wrong, suppose the direction of the slope is reversed in some of the experimental conditions. You would average a bunch of 1's and -1's out to around 0, which wouldn't reflect the quality of any of the fits. To see why averaging $R^2$ (or any fixed transformation thereof) is not right, suppose that in most experimental conditions you had only two observations, so that their $R^2$ all equal $1$, but in one experiment you had a hundred observations with $R^2=0$. The average $R^2$ of almost 1 would not correctly reflect the situation.
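A rough numpy sketch of this idea (simulated data; plain least squares on a dummy/interaction design matrix rather than any particular package's formula interface):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated example: 3 experimental conditions, each with its own slope.
n_per, slopes = 30, [2.0, -1.0, 0.5]
x = rng.uniform(0, 1, size=(3, n_per))
y = np.array([b * xi for b, xi in zip(slopes, x)]) + rng.normal(0, 0.1, (3, n_per))

# Design matrix for y ~ z + x:z, i.e. one intercept and one slope per condition.
X = np.zeros((3 * n_per, 6))
for g in range(3):
    rows = slice(g * n_per, (g + 1) * n_per)
    X[rows, g] = 1.0          # condition dummy (intercept)
    X[rows, 3 + g] = x[g]     # condition-specific slope

yy = y.ravel()
beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
r2 = 1 - ((yy - X @ beta) ** 2).sum() / ((yy - yy.mean()) ** 2).sum()
print(round(r2, 3))
```

The single R² summarizes the quality of all the per-condition fits at once, even though the slopes differ in sign, which is exactly what averaging per-condition correlations cannot do.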
|
Averaging correlation values
|
The simple way is to add a categorical variable $z$ to identify the different experimental conditions and include it in your model along with an "interaction" with $x$; that is, $y \sim z + x\#z$. Th
|
Averaging correlation values
The simple way is to add a categorical variable $z$ to identify the different experimental conditions and include it in your model along with an "interaction" with $x$; that is, $y \sim z + x\#z$. This conducts all five regressions at once. Its $R^2$ is what you want.
To see why averaging individual $R$ values may be wrong, suppose the direction of the slope is reversed in some of the experimental conditions. You would average a bunch of 1's and -1's out to around 0, which wouldn't reflect the quality of any of the fits. To see why averaging $R^2$ (or any fixed transformation thereof) is not right, suppose that in most experimental conditions you had only two observations, so that their $R^2$ all equal $1$, but in one experiment you had a hundred observations with $R^2=0$. The average $R^2$ of almost 1 would not correctly reflect the situation.
|
Averaging correlation values
The simple way is to add a categorical variable $z$ to identify the different experimental conditions and include it in your model along with an "interaction" with $x$; that is, $y \sim z + x\#z$. Th
|
10,594
|
Averaging correlation values
|
For Pearson correlation coefficients, it is generally appropriate to transform the r values using a Fisher z transformation. Then average the z-values and convert the average back to an r value.
I imagine it would be fine for a Spearman coefficient as well.
Here's a paper and the wikipedia entry.
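A minimal numpy sketch (np.arctanh is the Fisher z transform and np.tanh its inverse; the example r values are arbitrary):

```python
import numpy as np

def average_correlation(rs):
    """Average correlation coefficients via Fisher's z transform:
    z = arctanh(r), average the z values, then map back with tanh."""
    z = np.arctanh(np.asarray(rs, dtype=float))
    return float(np.tanh(z.mean()))

print(average_correlation([0.2, 0.5, 0.8]))
```

Note the result (about 0.549) differs slightly from the naive mean of 0.5, since the transform accounts for the bounded, skewed sampling distribution of r.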
|
Averaging correlation values
|
For Pearson correlation coefficients, it is generally appropriate to transform the r values using a Fisher z transformation. Then average the z-values and convert the average back to an r value.
I im
|
Averaging correlation values
For Pearson correlation coefficients, it is generally appropriate to transform the r values using a Fisher z transformation. Then average the z-values and convert the average back to an r value.
I imagine it would be fine for a Spearman coefficient as well.
Here's a paper and the wikipedia entry.
|
Averaging correlation values
For Pearson correlation coefficients, it is generally appropriate to transform the r values using a Fisher z transformation. Then average the z-values and convert the average back to an r value.
I im
|
10,595
|
Averaging correlation values
|
The average correlation can be meaningful. Also consider the distribution of correlations (for example, plot a histogram).
But as I understand it, for each individual you have some ranking of $n$ items plus predicted rankings of those items for that individual, and you're looking at the correlation between an individual's rankings and the predicted ones.
In this case, it may be that the correlation is not the best measure of how well the algorithm is making predictions. For example, imagine that the algorithm gets the first 100 items perfectly and the next 200 items totally messed up, vs the opposite. It could be that you care only about the quality of the top rankings. In this case, you might look at the sum of the absolute differences between the individual's ranking and the predicted ranking, but only among the individual's top $m$ items.
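A small Python sketch of that last idea (the item names, rankings, and helper function are hypothetical, purely for illustration):

```python
def top_m_rank_error(true_rank, pred_rank, m):
    """Sum of |true - predicted| rank over an individual's top-m items.
    Both arguments map item -> rank (1 = best)."""
    top = [item for item, r in true_rank.items() if r <= m]
    return sum(abs(true_rank[i] - pred_rank[i]) for i in top)

true_rank = {"a": 1, "b": 2, "c": 3, "d": 4}
pred_rank = {"a": 2, "b": 1, "c": 4, "d": 3}
print(top_m_rank_error(true_rank, pred_rank, m=2))  # |1-2| + |2-1| = 2
```

Unlike a correlation over all items, this score is unaffected by how badly the algorithm does outside the user's top m.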
|
Averaging correlation values
|
The average correlation can be meaningful. Also consider the distribution of correlations (for example, plot a histogram).
But as I understand it, for each individual you have some ranking of $n$ item
|
Averaging correlation values
The average correlation can be meaningful. Also consider the distribution of correlations (for example, plot a histogram).
But as I understand it, for each individual you have some ranking of $n$ items plus predicted rankings of those items for that individual, and you're looking at the correlation between an individual's rankings and the predicted ones.
In this case, it may be that the correlation is not the best measure of how well the algorithm is making predictions. For example, imagine that the algorithm gets the first 100 items perfectly and the next 200 items totally messed up, vs the opposite. It could be that you care only about the quality of the top rankings. In this case, you might look at the sum of the absolute differences between the individual's ranking and the predicted ranking, but only among the individual's top $m$ items.
|
Averaging correlation values
The average correlation can be meaningful. Also consider the distribution of correlations (for example, plot a histogram).
But as I understand it, for each individual you have some ranking of $n$ item
|
10,596
|
Averaging correlation values
|
What about using mean squared prediction error (MSPE) for the performance of the algorithm? This is a standard approach to what you are trying to do, if you are trying to compare predictive performance among a set of algorithms.
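For concreteness, a minimal numpy version (toy numbers):

```python
import numpy as np

def mspe(actual, predicted):
    """Mean squared prediction error."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return float(np.mean((a - p) ** 2))

print(mspe([3.0, 5.0, 7.0], [2.0, 5.0, 9.0]))  # (1 + 0 + 4) / 3
```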
|
Averaging correlation values
|
What about using mean squared prediction error (MSPE) for the performance of the algorithm? This is a standard approach to what you are trying to do, if you are trying to compare predictive performance
|
Averaging correlation values
What about using mean squared prediction error (MSPE) for the performance of the algorithm? This is a standard approach to what you are trying to do, if you are trying to compare predictive performance among a set of algorithms.
|
Averaging correlation values
What about using mean squared prediction error (MSPE) for the performance of the algorithm? This is a standard approach to what you are trying to do, if you are trying to compare predictive performance
|
10,597
|
Is there a "hello, world" for statistical graphics?
|
Two thoughts:
A. When I try to get at the essence of "Hello World", it's the minimum that must be done in the programming language to generate a valid program that prints a single line of text. That suggests to me that your "Hello World" should be a univariate data set, the most basic thing you could plug into a statistical or graphics program.
B. I'm unaware of any graphing "Hello World". The closest I can come is typical datasets that are included in various statistical packages, such as R's AirPassengers. In R, a Hello World graphing statement would be:
plot (AirPassengers) # Base graphics, prints line graph
or
qplot (AirPassengers) # ggplot2, prints a bar chart
or
xyplot (AirPassengers) # lattice, which doesn't have a generic plot
Personally, I think the simplest graph is a line graph where you have N items in Y and X ranges from 1:N. But that's not a standard.
|
Is there a "hello, world" for statistical graphics?
|
Two thoughts:
A. When I try to get at the essence of "Hello World", it's the minimum that must be done in the programming language to generate a valid program that prints a single line of text. That s
|
Is there a "hello, world" for statistical graphics?
Two thoughts:
A. When I try to get at the essence of "Hello World", it's the minimum that must be done in the programming language to generate a valid program that prints a single line of text. That suggests to me that your "Hello World" should be a univariate data set, the most basic thing you could plug into a statistical or graphics program.
B. I'm unaware of any graphing "Hello World". The closest I can come is typical datasets that are included in various statistical packages, such as R's AirPassengers. In R, a Hello World graphing statement would be:
plot (AirPassengers) # Base graphics, prints line graph
or
qplot (AirPassengers) # ggplot2, prints a bar chart
or
xyplot (AirPassengers) # lattice, which doesn't have a generic plot
Personally, I think the simplest graph is a line graph where you have N items in Y and X ranges from 1:N. But that's not a standard.
|
Is there a "hello, world" for statistical graphics?
Two thoughts:
A. When I try to get at the essence of "Hello World", it's the minimum that must be done in the programming language to generate a valid program that prints a single line of text. That s
|
10,598
|
Is there a "hello, world" for statistical graphics?
|
I would probably start with scatterplots and demonstrate the four ugly correlations.
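Assuming "the four ugly correlations" refers to Anscombe's quartet (four x-y pairs with nearly identical means, variances, and correlations but very different shapes, shipped with R as `anscombe`), a sketch of that demonstration:

```r
# Anscombe's quartet: same summary statistics, very different scatterplots
data(anscombe)
op <- par(mfrow = c(2, 2))
for (i in 1:4) {
  x <- anscombe[[paste0("x", i)]]
  y <- anscombe[[paste0("y", i)]]
  plot(x, y, main = paste("Dataset", i))
  abline(lm(y ~ x))  # the fitted line is nearly identical in all four panels
}
par(op)
```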
|
Is there a "hello, world" for statistical graphics?
|
I would probably start with scatterplots and demonstrate the four ugly correlations.
|
Is there a "hello, world" for statistical graphics?
I would probably start with scatterplots and demonstrate the four ugly correlations.
|
Is there a "hello, world" for statistical graphics?
I would probably start with scatterplots and demonstrate the four ugly correlations.
|
10,599
|
Is there a "hello, world" for statistical graphics?
|
The histogram of a sample of a normally distributed random variable.
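In R this is essentially a one-liner; a small sketch with a fixed seed for reproducibility:

```r
set.seed(42)      # reproducible draws
x <- rnorm(1000)  # 1000 samples from a standard normal
hist(x, main = "Histogram of a standard normal sample")
```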
|
Is there a "hello, world" for statistical graphics?
|
The histogram of a sample of a normally distributed random variable.
|
Is there a "hello, world" for statistical graphics?
The histogram of a sample of a normally distributed random variable.
|
Is there a "hello, world" for statistical graphics?
The histogram of a sample of a normally distributed random variable.
|
10,600
|
Is there a "hello, world" for statistical graphics?
|
I think the answer is "no". That is, there is no generally agreed-upon answer to your question.
@StasK points to the scatterplot.
But I'd consider what plot does in R: It depends on the data!
You could argue that univariate statistics are simpler than bivariate ones. So... perhaps the most basic thing is a histogram; or perhaps a bar plot; maybe a density plot.
If the point of "Hello, World!" is to show that you can make the computer do something then I'd say any plot would do.
|
Is there a "hello, world" for statistical graphics?
|
I think the answer is "no". That is, there is no generally agreed upon answer to your question.
@StasK points to the scatterplot.
But I'd consider what plot does in R: It depends on the data!
You cou
|
Is there a "hello, world" for statistical graphics?
I think the answer is "no". That is, there is no generally agreed upon answer to your question.
@StasK points to the scatterplot.
But I'd consider what plot does in R: It depends on the data!
You could argue that univariate statistics are simpler than bivariate ones. So... perhaps the most basic thing is a histogram; or perhaps a bar plot; maybe a density plot.
If the point of "Hello, World!" is to show that you can make the computer do something then I'd say any plot would do.
|
Is there a "hello, world" for statistical graphics?
I think the answer is "no". That is, there is no generally agreed upon answer to your question.
@StasK points to the scatterplot.
But I'd consider what plot does in R: It depends on the data!
You cou
|