Dataset schema (column, type, min, max):

  idx                int64          1     56k
  question           stringlengths  15    155
  answer             stringlengths  2     29.2k
  question_cut       stringlengths  15    100
  answer_cut         stringlengths  2     200
  conversation       stringlengths  47    29.3k
  conversation_cut   stringlengths  47    301
10,501
What is a feasible sequence length for an RNN to model?
It totally depends on the nature of your data and its inner correlations; there is no rule of thumb. However, given that you have a large amount of data, a 2-layer LSTM can model a large body of time series problems / benchmarks. Furthermore, you don't backpropagate through time over the whole series but usually over (200-3...
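The truncated backpropagation-through-time the answer mentions amounts to unrolling the RNN over fixed-length chunks of the series rather than the whole thing. A minimal numpy sketch of just the windowing step (the chunk length k=200 follows the answer's range; the helper name is an illustrative assumption):

```python
import numpy as np

def tbptt_windows(series, k=200):
    """Split a long 1-D series into non-overlapping windows of length k,
    the chunks over which truncated BPTT would unroll the RNN.
    Illustrative sketch only; the ragged tail is dropped for simplicity."""
    n = len(series) // k * k
    return np.asarray(series[:n]).reshape(-1, k)

series = np.arange(1000)
windows = tbptt_windows(series, k=200)
print(windows.shape)  # (5, 200)
```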
10,502
Is there a result that provides the bootstrap is valid if and only if the statistic is smooth?
$\blacksquare$ (1) Why are quantile estimators not Frechet differentiable, yet their bootstrap estimator is still consistent? You need Hadamard differentiability (or compact differentiability, depending on your reference source) as a sufficient condition to make the bootstrap work in that case; the median and any quantile is ...
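As a concrete companion to the point about quantiles, here is a minimal numpy sketch of the nonparametric bootstrap for the sample median, whose consistency the answer attributes to Hadamard differentiability. The sample size, seed, and the comparison against the asymptotic standard error for Gaussian data are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)              # observed sample

# Nonparametric bootstrap of the sample median
B = 2000
meds = np.array([np.median(rng.choice(x, size=x.size, replace=True))
                 for _ in range(B)])
se_boot = meds.std(ddof=1)

# Asymptotic SE of the median for N(0,1): 1/(2 f(0) sqrt(n)) = sqrt(pi/2)/sqrt(n)
se_theory = np.sqrt(np.pi / 2) / np.sqrt(x.size)
print(se_boot, se_theory)             # the two should be of similar magnitude
```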
10,503
How does Krizhevsky's '12 CNN get 253,440 neurons in the first layer?
From the Stanford notes on NNs: Real-world example. The Krizhevsky et al. architecture that won the ImageNet challenge in 2012 accepted images of size [227x227x3]. On the first convolutional layer, it used neurons with receptive field size F=11, stride S=4 and no zero padding P=0. Since (227 - 11)/4 + 1 = 55, and since...
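The arithmetic in the quoted note can be checked directly with the standard conv output-size formula. Note that it yields 55x55x96 = 290,400 neurons, which does not match the paper's 253,440 figure — the inconsistency the next answer complains about. (Variable names are mine.)

```python
# Output spatial size of a conv layer: (W - F + 2P) / S + 1
W, F, S, P = 227, 11, 4, 0     # input width, filter size, stride, padding
K = 96                         # number of filters in AlexNet's first layer

side = (W - F + 2 * P) // S + 1
neurons = side * side * K
print(side, neurons)           # 55, 290400 -- not the paper's 253,440
```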
10,504
How does Krizhevsky's '12 CNN get 253,440 neurons in the first layer?
This paper is really confusing. First off, the stated input size of the images is incorrect: 224x224 does not give an output of 55. Those neurons are simply like grouped pixels, so the output is a 2D image of random values (neuron values). So basically the number of neurons = width x height x depth; no secrets are there to...
10,505
How to treat categorical predictors in LASSO
When dealing with categorical variables in LASSO regression, it is usual to use a grouped LASSO that keeps the dummy variables corresponding to a particular categorical variable together (i.e., you cannot exclude only some of the dummy variables from the model). A useful method is the Modified Group LASSO (MGL) descri...
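The group penalty the answer describes can be sketched with a proximal-gradient loop: each categorical variable's dummy columns are soft-thresholded together, so the whole group enters or leaves the model at once. This is a plain group LASSO for a linear model, not the Modified Group LASSO (MGL) the answer cites, and all simulation settings below are illustrative assumptions:

```python
import numpy as np

def group_lasso(X, y, groups, lam=0.1, t=None, iters=500):
    """Proximal-gradient sketch of the group LASSO for a linear model.
    `groups` is a list of column-index arrays; each group's dummy columns
    are kept together, as the answer describes."""
    n, p = X.shape
    if t is None:                              # step 1/L, L = Lipschitz constant
        t = n / np.linalg.norm(X, 2) ** 2
    b = np.zeros(p)
    for _ in range(iters):
        z = b - t * X.T @ (X @ b - y) / n      # gradient step
        for g in groups:                       # groupwise soft-threshold
            nrm = np.linalg.norm(z[g])
            z[g] = 0.0 if nrm == 0 else max(0.0, 1 - t * lam / nrm) * z[g]
        b = z
    return b

rng = np.random.default_rng(1)
n = 300
# two 3-level categorical predictors -> two groups of 2 dummy columns each
X = rng.normal(size=(n, 4))
beta_true = np.array([2.0, -1.5, 0.0, 0.0])    # second categorical has no effect
y = X @ beta_true + rng.normal(scale=0.5, size=n)
b = group_lasso(X, y, groups=[np.array([0, 1]), np.array([2, 3])], lam=0.5)
print(b)   # first group survives (shrunk); second group is zeroed as a whole
```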
10,506
Hidden Markov Model vs Markov Transition Model vs State-Space Model...?
The following is quoted from the Scholarpedia website: State space model (SSM) refers to a class of probabilistic graphical model (Koller and Friedman, 2009) that describes the probabilistic dependence between the latent state variable and the observed measurement. The state or the measurement can be either continuou...
10,507
Hidden Markov Model vs Markov Transition Model vs State-Space Model...?
Alan Hawkes and I have written quite a lot about aggregated Markov processes with discrete states in continuous time. Our work has been about the problem of interpreting observations of single ion channel molecules, and includes an exact treatment of missed short events. Similar theory works in reliability theory too...
10,508
Common statistical tests as linear models
Not an exhaustive list, but if you include generalized linear models, the scope of this problem becomes substantially larger. For instance: the Cochran-Armitage test of trend can be formulated as: $$E[\mbox{logit} (p) | t] = \beta_0 + \beta_1 t \qquad \mathcal{H}_0: \beta_1 = 0$$ The Pearson Chi-Square test of independe...
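The Cochran-Armitage-as-GLM formulation above can be fit with a short IRLS (Newton) loop and the null $\mathcal{H}_0: \beta_1 = 0$ checked with a Wald test. A hedged numpy/scipy sketch — the simulated dose levels and effect size are my own illustrative choices:

```python
import numpy as np
from scipy import stats

def logit_fit(X, y, iters=25):
    """Newton-Raphson (IRLS) fit of a logistic model; returns (beta, cov)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T * (p * (1 - p)) @ X            # Fisher information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    p = 1 / (1 + np.exp(-X @ beta))
    cov = np.linalg.inv(X.T * (p * (1 - p)) @ X)
    return beta, cov

rng = np.random.default_rng(2)
t = np.repeat([0, 1, 2, 3], 50).astype(float)  # ordered dose levels
p_true = 1 / (1 + np.exp(-(-1.0 + 0.8 * t)))   # logit(p) = -1 + 0.8 t
y = rng.binomial(1, p_true)
X = np.column_stack([np.ones_like(t), t])

beta, cov = logit_fit(X, y)
z = beta[1] / np.sqrt(cov[1, 1])               # Wald statistic for H0: b1 = 0
pval = 2 * stats.norm.sf(abs(z))
print(beta, pval)
```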
10,509
Random forests for multivariate regression
Here's an example of a multi-output regression problem undertaken with facial recognition. It includes a coding sample as well and should give you a start with your methodology. http://scikit-learn.org/stable/auto_examples/plot_multioutput_face_completion.html
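In scikit-learn this works out of the box: RandomForestRegressor accepts a 2-D target, so a single forest predicts all outputs jointly, in the same spirit as the face-completion example linked above. A small sketch on synthetic data (shapes, targets, and seeds are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
# two correlated targets driven by the same features
Y = np.column_stack([X[:, 0] + X[:, 1], X[:, 0] - 2 * X[:, 2]])
Y += rng.normal(scale=0.1, size=Y.shape)

# a 2-D target fits natively: each leaf stores a vector of outputs
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Y)
pred = model.predict(X[:5])
print(pred.shape)   # (5, 2)
```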
10,510
Random forests for multivariate regression
There is a new package specifically for that (not personally tested): https://cran.r-project.org/package=MultivariateRandomForest
10,511
Is there a Bayesian approach to density estimation
Since you want a Bayesian approach, you need to assume some prior knowledge about the thing you want to estimate. This will be in the form of a distribution. Now, there's the issue that this is now a distribution over distributions. However, this is no problem if you assume that the candidate distributions come from so...
10,512
Is there a Bayesian approach to density estimation
For density estimation purposes, what you need is not $\theta_{n+1}|x_{1},\ldots,x_{n}$. The formula in the notes, $\theta_{n+1}|\theta_{1},\ldots,\theta_{n}$, refers to the predictive distribution of the Dirichlet process. For density estimation you actually have to sample from the predictive distribution $$ \pi(dx_{n+1}|x_...
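Sampling from the DP predictive distribution the answer refers to can be illustrated with the Pólya urn (Blackwell-MacQueen) scheme: the next draw is a fresh atom from the base measure with probability $\alpha/(\alpha+n)$, otherwise a repeat of a uniformly chosen earlier draw. A hedged numpy sketch with $G_0 = N(0,1)$ as a stand-in base measure (all settings are illustrative):

```python
import numpy as np

def dp_polya_urn(n, alpha=1.0, base=None, rng=None):
    """Draw theta_1..theta_n from a Dirichlet process via the Polya urn
    predictive scheme.  `base` samples from the base measure G0."""
    rng = rng or np.random.default_rng()
    base = base or (lambda: rng.normal())      # G0 = N(0,1) stand-in
    thetas = []
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            thetas.append(base())              # new atom from G0
        else:
            thetas.append(thetas[rng.integers(i)])   # reuse an earlier atom
    return np.array(thetas)

draws = dp_polya_urn(500, alpha=2.0, rng=np.random.default_rng(4))
print(len(np.unique(draws)))   # clustering: far fewer distinct atoms than draws
```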
10,513
Is there a Bayesian approach to density estimation
Is there some approach to update F based on my new readings? There is something precisely for that. It's pretty much the main idea of Bayesian inference. $p(\theta | y) \propto p(y|\theta)p(\theta)$ The $p(\theta)$ is your prior, what you call $F$. The $p(y|\theta)$ is what Bayesians call the "likelihood" and it is ...
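The update rule $p(\theta|y) \propto p(y|\theta)\,p(\theta)$ quoted above has a closed form when the prior is conjugate. As a minimal illustration, a Beta prior and Bernoulli readings serve as stand-ins for the poster's $F$ and data (both are my own assumptions, not from the thread):

```python
import numpy as np

# Prior over an unknown success probability theta: Beta(a, b)
a, b = 2.0, 2.0

# New readings: 20 Bernoulli observations
rng = np.random.default_rng(5)
y = rng.binomial(1, 0.7, size=20)

# p(theta|y) ∝ p(y|theta) p(theta); for Beta prior + Bernoulli likelihood
# the posterior is again Beta, updated by the success/failure counts
a_post = a + y.sum()
b_post = b + (len(y) - y.sum())
post_mean = a_post / (a_post + b_post)
print(a_post, b_post, post_mean)
```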
10,514
Appropriate residual degrees of freedom after dropping terms from a model
Do you disagree with @FrankHarrel's answer that parsimony comes with some ugly scientific trade-offs, anyways? I love the link provided in @MikeWiezbicki's comment to Doug Bates' rationale. If someone disagrees with your analysis, they can do it their way, and this is a fun way to start a scientific discussion about...
10,515
Backpropagation on a convolutional layer
Could you not simply say that backpropagation on a convolutional layer is the sum of the backpropagation on each part (sliding window) of the image/tensor that the convolution covers? This is important, as it connects to the fact that the weights are shared over multiple pixels and thus weights should reflect genera...
10,516
Why is logistic regression called a machine learning algorithm?
Machine Learning is not a well-defined term. In fact, if you Google "Machine Learning Definition" the first two things you get are quite different. From WhatIs.com: Machine learning is a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Mac...
10,517
Why is logistic regression called a machine learning algorithm?
Machine Learning is hot and is where the money is. People call things they're trying to sell whatever is hot at the moment and therefore "sells". That can be selling software. That can be selling themselves as current employees trying to get promoted, as prospective employees, as consultants, etc. That can be a mana...
10,518
Why is logistic regression called a machine learning algorithm?
As others have mentioned already, there's no clear separation between statistics, machine learning, artificial intelligence and so on, so take any definition with a grain of salt. Logistic regression is probably more often labeled as statistics rather than machine learning, while neural networks are typically labeled as...
10,519
Why is logistic regression called a machine learning algorithm?
Logistic regression was invented by the statistician D. R. Cox in 1958 and so predates the field of machine learning. Logistic regression is not a classification method, thank goodness. It is a direct probability model. If you think that an algorithm has to have two phases (an initial guess, then "correct" the prediction "erro...
10,520
Why is logistic regression called a machine learning algorithm?
I'll have to disagree with most of the answers here and claim that Machine Learning has a very precise scope and a clear cut distinction from Statistics. ML is a sub-field of computer science with a long history, which only in recent years has found applications outside its domain. ML's paternal field and application d...
10,521
Why is logistic regression called a machine learning algorithm?
I finally figured it out. I now know the difference between statistical model fitting and machine learning. If you fit a model (regression), that's statistical model fitting. If you learn a model (regression), that's machine learning. So if you learn a logistic regression, that is a machine learning algorithm. Comment:...
10,522
Why is logistic regression called a machine learning algorithm?
Machine learning is pretty loosely defined and you're correct in thinking that regression models--and not just logistic regression ones--also "learn" from the data. I'm not really sure if this means machine learning is really statistics or statistics is really machine learning--or if any of this matters at all. However...
10,523
Why is logistic regression called a machine learning algorithm?
Logistic regression (and more generally, the GLM) does NOT belong to machine learning! Rather, these methods belong to parametric modeling. Both parametric and algorithmic (ML) models use the data, but in different ways. Algorithmic models learn from the data how predictors map to the predictand, but they do not make any...
10,524
Why is logistic regression called a machine learning algorithm?
I think the other answers do a good job at identifying more or less what Machine Learning is (as they indicate, it can be a fuzzy thing). I will add that Logistic Regression (and its more general multinomial version) is very commonly used as a means of performing classification in artificial neural networks (which I th...
10,525
Why is logistic regression called a machine learning algorithm?
I think any procedure which is "iterative" can be considered a case of machine learning. Regression can be considered machine learning. We could do it by hand, but it would take a long time, if at all possible. So now we have these programs, machines, which do the iterations for us. It gets closer and closer to a s...
10,526
Why is logistic regression called a machine learning algorithm?
This is a very common mistake that most people make, and I can see it here also (made by almost everyone). Let me explain it in detail... Logistic regression and linear regression are both parametric models as well as machine learning techniques. It just depends on the method you are using to estimate the model paramet...
10,527
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
All other things being equal (when are they?), machine learning models require similar quantities of data to statistical models. In general, statistical models tend to have more assumptions than machine learning models, and it is these additional assumptions that give you more power (assuming they are true/valid), which means that...
10,528
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
Well, you could do inference with a small amount of data. We just have concepts like statistical power to tell us when our results would be reliable and when they would not be. In general, lots of data is needed in machine learning to overcome the variance in estimators/models. Trees, as an example, are incredibly hi...
10,529
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
Machine learning does not require large amounts of data, it is just that the current bandwagon is for models that work on big data (mainly deep neural networks, which have been around since the 1990s, but before that it was SVMs and before that "shallow" neural nets), but research on other forms of machine learning has...
10,530
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
Machine learning (often) needs a lot of data because it doesn't start with a well defined model and uses (additional) data to define or improve the model. As a consequence there are often a lot of additional parameters to be estimated, parameters or settings that are already defined a-priori in non-machine-learning met...
10,531
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
A typical machine learning model contains thousands to millions of parameters, while statistical modelling is typically limited to a handful of parameters. As a rule of thumb, the minimum number of samples you need is proportional to the number of parameters you want to estimate. So for statistical modelling of a handf...
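The rule of thumb above can be illustrated with a minimal sketch (all data invented): fitting a polynomial with as many parameters as data points drives the training error to essentially zero, the over-parameterised regime where generalisation typically suffers and more data is needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small invented sample: y = sin(x) + noise
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=10)
x_test = np.linspace(0.1, 2.9, 50)
y_test = np.sin(x_test)

def mse(deg):
    """Train/test mean squared error of a degree-`deg` polynomial fit."""
    coefs = np.polyfit(x_train, y_train, deg)
    tr = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    te = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    return tr, te

tr3, te3 = mse(3)  # 4 parameters for 10 samples: a "statistical" parameter count
tr9, te9 = mse(9)  # 10 parameters for 10 samples: the fit interpolates the noise
```

The degree-9 fit reproduces the training data almost exactly (training error near machine precision), which is precisely why high-capacity models need far more samples than parameters to pin the fit down.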
10,532
Why does Machine Learning need a lot of data while one can do statistical inference with a small set of data?
Machine learning and statistical inference deal with different types of problems and are not comparable from this point of view. Statistical inference is used in problems that are inherently statistical; for example, if it has rained for ten days, then the next day will more probably (using a Bayesian approach) be rainy as well,...
10,533
When is quantile regression worse than OLS?
If you are interested in the mean, use OLS; if in the median, use quantile regression. One big difference is that the mean is more affected by outliers and other extreme data. Sometimes, that is what you want. One example is if your dependent variable is the social capital in a neighborhood. The presence of a single person with a...
10,534
When is quantile regression worse than OLS?
There seems to be a confusion in the premise of the question. In the second paragraph it says, "we could just use median regression as the OLS substitute". Note that regressing the conditional median on X is (a form of) quantile regression. If the error in the underlying data generating process is normally distribu...
10,535
When is quantile regression worse than OLS?
Both OLS and quantile regression (QR) are estimation techniques for estimating the coefficient vector $\beta$ in a linear regression model $$ y = X\beta + \varepsilon $$ (for the case of QR see Koenker (1978), p. 33, second paragraph). For certain error distributions (e.g. those with heavy tails), the QR estimator $\h...
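The efficiency comparison is easy to simulate in the intercept-only special case: for heavy-tailed Laplace (double-exponential) errors, the sample median — the 0.5-quantile-regression estimator of a location parameter — has roughly half the sampling variance of the sample mean, the OLS estimator. The simulation sizes below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Intercept-only model y = mu + eps with heavy-tailed Laplace errors.
# OLS estimates mu by the sample mean; median regression by the sample median.
n, reps = 200, 2000
samples = rng.laplace(loc=0.0, scale=1.0, size=(reps, n))

var_mean = np.var(samples.mean(axis=1))
var_median = np.var(np.median(samples, axis=1))
# Asymptotically var(mean) = 2/n and var(median) = 1/n for Laplace(0, 1),
# so the median should come out roughly twice as efficient here.
```

Reversing the error distribution reverses the verdict: with Gaussian errors the mean (OLS) is the more efficient of the two, which is the sense in which quantile regression can be "worse".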
10,536
When is quantile regression worse than OLS?
To say what some of the excellent responses above said, but in a slightly different way, quantile regression makes fewer assumptions. On the right hand side of the model the assumptions are the same as with OLS, but on the left hand side the only assumption is continuity of the distribution of $Y$ (few ties). One cou...
10,537
When is quantile regression worse than OLS?
Peter Flom had a great and concise answer; I just want to expand on it. The most important part of the question is how to define "worse". In order to define worse, we need some metric, and the function used to calculate how good or bad a fit is is called a loss function. We can have different definitions of the l...
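A minimal sketch of the loss-function point (sample values invented): minimizing squared loss over a constant recovers the mean, minimizing absolute loss recovers the median, and a single outlier drags only the former.

```python
import numpy as np

# Small invented sample with one extreme value
y = np.array([1.0, 2.0, 2.0, 3.0, 100.0])

# Brute-force the constant fit c that minimizes each loss over a fine grid
grid = np.linspace(0, 110, 11001)
sq_loss = ((y[:, None] - grid[None, :]) ** 2).sum(axis=0)
abs_loss = np.abs(y[:, None] - grid[None, :]).sum(axis=0)

best_sq = grid[np.argmin(sq_loss)]    # squared loss -> the mean (21.6)
best_abs = grid[np.argmin(abs_loss)]  # absolute loss -> the median (2.0)
```

Under squared loss the single outlier moves the optimal fit to 21.6; under absolute loss it stays at 2.0 — neither answer is "worse" in the abstract, only relative to the loss you care about.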
10,538
Least Squares Regression Step-By-Step Linear Algebra Computation
Note: I've posted an expanded version of this answer on my website. Would you kindly consider posting a similar answer with the actual R engine exposed? Sure! Down the rabbit hole we go. The first layer is lm, the interface exposed to the R programmer. You can look at the source for this by just typing lm at the R...
10,539
Least Squares Regression Step-By-Step Linear Algebra Computation
The actual step-by-step calculations in R are beautifully described in the answer by Matthew Drury in this same thread. In this answer I want to walk through the process of proving to oneself that the results in R with a simple example can be reached following the linear algebra of projections onto the column space, an...
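The projection argument can be checked numerically in a few lines: the normal-equations solution $\hat\beta = (X^\top X)^{-1}X^\top y$ agrees with the QR route (essentially what R's lm does internally), and the residual is orthogonal to the column space of $X$. The toy data below is invented; this is a numpy sketch, not R's actual code path.

```python
import numpy as np

rng = np.random.default_rng(3)

# Design matrix with an intercept column (invented toy data)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(scale=0.3, size=n)

# Normal equations: solve X'X beta = X'y (solve, don't invert, for stability)
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# QR route: X = QR, then solve R beta = Q'y
Q, R = np.linalg.qr(X)
beta_qr = np.linalg.solve(R, Q.T @ y)

residual = y - X @ beta_normal
orth = X.T @ residual  # should be numerically zero: residual is orthogonal to col(X)
```

Both routes land on the same coefficient vector; the QR route is preferred in practice because it avoids forming $X^\top X$, whose condition number is the square of that of $X$.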
10,540
Econometrics textbooks?
Definitely Econometric Analysis, by Greene. I'm not an econometrician, but I found this book very useful and well written.
10,541
Econometrics textbooks?
Depends on what level you're after. At a postgraduate level, the one I've most often seen referenced and recommended, and have therefore looked at most myself, is: Wooldridge, Jeffrey M. Econometric Analysis of Cross Section and Panel Data. MIT Press, 2001. ISBN 9780262232197 Most of what little I know about econometri...
10,542
Econometrics textbooks?
"Mostly Harmless Econometrics: An Empiricist's Companion" (Angrist, Pischke 2008) is a less technical and entertaining summary of the field. I wouldn't describe it as a beginner book, but it's well worth reading once you understand the basics.
10,543
Econometrics textbooks?
It depends on what you really want (GMM, time series, panel...), but I can recommend those two books: Fumio Hayashi's "Econometrics" and Davidson and MacKinnon's "Econometric Theory and Methods". For a course in econometric time series, Hamilton's "Time Series Analysis" is great.
10,544
Econometrics textbooks?
I really like Kennedy's A Guide to Econometrics, which is unusual in its setup, since every topic is discussed on three different levels, first in a non-technical way, then going into details of application and finally going into theoretical details, although the theoretical parts are a bit superficial.
10,545
Econometrics textbooks?
I would definitely recommend M. Verbeek's A Guide to Modern Econometrics. Wooldridge is too wordy (and this long-windedness loses the reader's focus too early in the chapters). Greene (I'm referring to the 5th edition) often gets lost in minutiae: i.e. it strives to catalog formulae that are orthogonal to the main subject...
10,546
Econometrics textbooks?
"Applied Econometrics with R" (Kleiber, Zeileis 2008) is a good introduction using R, and is accompanied by the AER package.
10,547
Econometrics textbooks?
(Disclaimer: I'm not an economist.) I gather you might like to have a range of possibilities listed, however, most of the answers focus on more advanced texts. Should someone want a very introductory text, I can recommend: Gujarati, D., & Porter, D. (2008). Basic Econometrics. McGraw-Hill/Irwin. This is very bas...
10,548
Econometrics textbooks?
I am an econometrics lecturer. Definitely, the best book depends on what you want and the level that is suitable for you. However, my first option is "Basic Econometrics" written by Gujarati. The fourth edition of that textbook provides a good and well-written overview of the subject (Gujarati, 2002). Sadly, I cannot ...
10,549
Econometrics textbooks?
One at a somewhat lower level of mathematical sophistication than Wooldridge (less dense, more pictures), but a bit more up to date on some of the fast-moving areas: Murray, Michael P. Econometrics: A Modern Introduction. Addison Wesley, 2006. 976 pp. ISBN 9780321113610 Seems that it's not available for preview on the ...
10,550
Econometrics textbooks?
I like Cameron and Trivedi's Microeconometrics. It strikes a nice balance between breadth, intuition, and rigor (if you follow up on the references). The target audience is the applied researcher. Their Microeconometrics Using Stata is also quite good if you're a Stata user, though it covers less ground. At advanced un...
10,551
Econometrics textbooks?
Hashem Pesaran's book looks very promising. It covers such topics as dependencies in panel data and others that I haven't seen in other books.
10,552
Econometrics textbooks?
I prefer the fourth edition of "Basic Econometrics", among other reasons, because the text is completely self-contained. The fifth edition requires access to the web in order to replicate the exercises contained in the text. (Users of the previous edition did not have this problem because the book was packed wi...
10,553
Visualizing Likert Item Response Data
I like the centered count view. This particular version removes the neutral answers (effectively treating neutral and n/a as the same) to show only the amount of agree/disagree opinions. The 0 point is where red and blue meet. The count axis is clipped out. For comparison, here are the same five responses as stacked p...
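The centered count view described above amounts to a small offset computation: after dropping the neutral category, disagree segments extend left of zero and agree segments to the right, so the zero point is the agree/disagree boundary. A sketch with invented counts (these offsets are what you would pass as the `left` argument of a horizontal bar plot):

```python
import numpy as np

# Invented responses for one item on a 5-point scale:
# [strongly disagree, disagree, neutral, agree, strongly agree]
counts = np.array([8, 15, 10, 30, 12])

# Drop the neutral category, as in the centered view described above
disagree = counts[:2]   # [8, 15]
agree = counts[3:]      # [30, 12]

# Segment widths in plotting order, and left edges centered at 0
widths = np.concatenate([disagree, agree])
lefts = np.concatenate([[0], np.cumsum(widths)[:-1]]) - disagree.sum()

# For the 0-100% stacked variant, scale each item to a common metric instead
pct = 100 * counts / counts.sum()
```

With these counts the segments span [-23, -15], [-15, 0], [0, 30], [30, 42], so the visible balance of red vs blue at the zero line directly encodes net agreement.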
10,554
Visualizing Likert Item Response Data
Stacked barcharts are generally well understood by non-statisticians, provided they are gently introduced. It is useful to scale them on a common metric (e.g. 0-100%), with a gradual color for each category if these are ordinal items (e.g. Likert). I prefer a dotchart (Cleveland dot plot) when there are not too many item...
10,555
Visualizing Likert Item Response Data
I think chl's answer is great. One thing I might add, is for the case you'd want to compare the correlation between the items. For that you can use something like a Correlation scatter-plot matrix for ordered-categorical data (That code still needs some tweaking - but it gives the general idea...)
10,556
Natural interpretation for LDA hyperparameters
David Blei has a great talk introducing LDA to students of a summer class: http://videolectures.net/mlss09uk_blei_tm/ In the first video he covers extensively the basic idea of topic modelling and how the Dirichlet distribution comes into play. The plate notation is explained as if all hidden variables are observed to show ...
10,557
Natural interpretation for LDA hyperparameters
The answer depends on whether you are assuming the symmetric or asymmetric dirichlet distribution (or, more technically, whether the base measure is uniform). Unless something else is specified, most implementations of LDA assume the distribution is symmetric. For the symmetric distribution, a high alpha-value means th...
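The effect of the concentration value can be checked directly by sampling from a symmetric Dirichlet (the dimensions and alpha values below are arbitrary): small alpha yields sparse draws dominated by one component, large alpha yields near-uniform draws.

```python
import numpy as np

rng = np.random.default_rng(42)
k = 5  # e.g. number of topics

# Symmetric Dirichlet: the same concentration for every component.
sparse_draws = rng.dirichlet(np.full(k, 0.1), size=2000)   # small alpha
uniformish   = rng.dirichlet(np.full(k, 10.0), size=2000)  # large alpha

# Fraction of draws dominated by a single component (> 0.5 of the mass).
dominated_small = (sparse_draws.max(axis=1) > 0.5).mean()
dominated_large = (uniformish.max(axis=1) > 0.5).mean()
```

With small alpha most of the probability mass in each draw piles onto a few components; with large alpha every draw sits close to the uniform vector (1/k, ..., 1/k).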
10,558
What's wrong with (some) pseudo-randomization
You are right to be skeptical. In general, one should use 'real' randomization, because typically one doesn't have all knowledge about relevant factors (unobservables). If one of those unobservables is correlated with the age being odd or even, then it is also correlated with whether or not they received treatment. If ...
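The danger described above can be demonstrated with a small simulation (entirely hypothetical numbers): suppose an unobservable happens to track age parity and the treatment has zero true effect. Assigning on parity then absorbs the confounder into the "effect" estimate, while a genuine coin flip does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
age = rng.integers(18, 80, size=n)

# Hypothetical unobservable that happens to track age parity.
confounder = (age % 2 == 0).astype(float)

# The treatment has zero true effect; only the confounder moves the outcome.
outcome = 2.0 * confounder + rng.normal(size=n)

# "Pseudo-randomize" on age parity: the estimate absorbs the confounder.
even = age % 2 == 0
biased = outcome[even].mean() - outcome[~even].mean()

# A genuine coin flip breaks the link to the confounder.
coin = rng.random(n) < 0.5
unbiased = outcome[coin].mean() - outcome[~coin].mean()
```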
10,559
What's wrong with (some) pseudo-randomization
It is a good exercise to uphold contrarian views from time to time, so let me begin by offering a few reasons in favor of this form of pseudo-randomization. They are, principally, that it is little different than any other form of systematic sampling, such as obtaining samples of environmental media at points of a gri...
10,560
What's wrong with (some) pseudo-randomization
I agree the example you give is pretty innocuous but... If the agents involved (either the person dealing out the intervention or the people getting the intervention) become aware of the assignment scheme they can take advantage of it. Such self selection should be fairly obvious why it is problematic in most experimen...
10,561
What's wrong with (some) pseudo-randomization
What you are proposing is NOT pseudo-randomization. Pseudo-randomization uses a seed (often taken from the computer's internal clock) to reproducibly generate a pseudo-random sequence. The randomization assignment does NOT depend on patient-level characteristics. The point of randomization is to balance the distribution of p...
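A minimal sketch of what is meant here (the function name and seed are invented for illustration): the seed, not any patient characteristic, determines the assignment sequence, and the same seed reproduces it exactly.

```python
import random

def assign_arms(patient_ids, seed=2024):
    """Reproducible 1:1 pseudo-random assignment; nothing about the
    patient (age, parity, ...) influences which arm they get."""
    rng = random.Random(seed)
    return {pid: rng.choice(["treatment", "control"]) for pid in patient_ids}

first_run  = assign_arms(range(10))
second_run = assign_arms(range(10))  # same seed -> identical assignment
```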
10,562
Column-wise matrix normalization in R [closed]
This is what sweep and scale are for.

sweep(m, 2, colSums(m), FUN="/")
scale(m, center=FALSE, scale=colSums(m))

Alternatively, you could use recycling, but you have to transpose it twice.

t(t(m)/colSums(m))

Or you could construct the full matrix you want to divide by, like you did in your question. Here's another wa...
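For readers outside R, the same column-wise normalization can be sketched with NumPy broadcasting (a translation I am adding, not part of the original answer); dividing by the row vector of column sums mirrors t(t(m)/colSums(m)) without any transposing.

```python
import numpy as np

m = np.array([[1., 2.],
              [3., 4.]])

# m.sum(axis=0) is the vector of column sums; broadcasting divides
# every row by it, so each column then sums to 1.
normalized = m / m.sum(axis=0)
```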
10,563
Column-wise matrix normalization in R [closed]
Another is prop.table(m, 2), which internally uses sweep. It may be of interest to compare the performance of these equivalent solutions, so I did a little benchmark (using the microbenchmark package). This is the input matrix m I've used: [,1] [,2] [,3] [,4] [,5...
10,564
Column-wise matrix normalization in R [closed]
apply(m, 2, function(x) x / sum(x)) ?
10,565
How to model this odd-shaped distribution (almost a reverse-J)
Methods of censored regression can handle data like this. They assume the residuals behave as in ordinary linear regression but have been modified so that (Left censoring): all values smaller than a low threshold, which is independent of the data (but can vary from one case to another), have not been quantified; an...
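A left-censored (Tobit-style) likelihood can be sketched as follows (the data are simulated; this illustrates the idea rather than reproducing any code from the answer): observed points contribute a density term, censored points contribute the probability of falling below the threshold.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
y_latent = 1.0 + 2.0 * x + rng.normal(size=n)

c = 0.0                      # left-censoring threshold
y = np.maximum(y_latent, c)  # below c we only know that y_latent <= c
censored = y <= c

def negloglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * x
    # Density term for observed points, probability term for censored ones.
    ll_obs = norm.logpdf(y[~censored], mu[~censored], sigma).sum()
    ll_cen = norm.logcdf((c - mu[censored]) / sigma).sum()
    return -(ll_obs + ll_cen)

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="BFGS")
b0_hat, b1_hat = fit.x[:2]
```

Maximizing this likelihood recovers the slope and intercept of the latent line despite roughly a third of the observations being stuck at the threshold.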
10,566
How to model this odd-shaped distribution (almost a reverse-J)
Are the values always between 0 and 1? If so you might consider a beta distribution and beta regression. But make sure to think through the process that leads to your data. You could also do a 0 and 1 inflated model (0 inflated models are common; you would probably need to extend to 1 inflated yourself). The big ...
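Fitting a beta distribution to (0, 1) data is a one-liner in scipy (the reverse-J-shaped data here are simulated purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Reverse-J shape on (0, 1): most of the mass piled up near 0.
data = rng.beta(a=0.5, b=3.0, size=5000)

# Maximum-likelihood fit; floc/fscale pin the support to [0, 1].
a_hat, b_hat, _, _ = stats.beta.fit(data, floc=0, fscale=1)
```

A shape parameter a < 1 is exactly what produces the spike at zero, so the fitted a_hat gives a quick check on whether a plain beta model can reproduce the shape.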
10,567
How to model this odd-shaped distribution (almost a reverse-J)
In concordance with Greg Snow's advice, I've heard beta models are useful in such situations as well (see Smithson & Verkuilen, 2006, A Better Lemon Squeezer), as well as quantile regression (Bottai et al., 2010), but the floor and ceiling effects here seem so pronounced that these models may be inappropriate (especially the bet...
10,568
Does the Bayesian posterior need to be a proper distribution?
(It is somewhat of a surprise to read the previous answers, which focus on the potential impropriety of the posterior when the prior is proper, since, as far as I can tell, the question is whether or not the posterior has to be proper (i.e., integrable to one) to be a proper (i.e., acceptable for Bayesian inference) po...
10,569
Does the Bayesian posterior need to be a proper distribution?
The posterior distribution need not be proper even if the prior is proper. For example, suppose $v$ has a Gamma prior with shape 0.25 (which is proper), and we model our datum $x$ as drawn from a Gaussian distribution with mean zero and variance $v$. Suppose $x$ is observed to be zero. Then the likelihood $p(x|v)$ i...
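This can be checked numerically (a sketch; the grid limits are arbitrary): near $v = 0$ the integrand of the normalizing constant behaves like $v^{-1.25}$, which is not integrable, so the integral grows without bound as the lower limit shrinks.

```python
import numpy as np
from scipy.stats import gamma, norm

# Normalizer of the posterior at x = 0:
#   integral over v of N(x=0 | 0, v) * Gamma(v; shape=0.25),
# whose integrand ~ v^(-1.25) near v = 0.
def integrand(v):
    return norm.pdf(0.0, loc=0.0, scale=np.sqrt(v)) * gamma.pdf(v, a=0.25)

def normalizer(eps):
    # Trapezoid rule on a log-spaced grid from eps up to v = 1000.
    v = np.logspace(np.log10(eps), 3, 20001)
    f = integrand(v)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v))

small = normalizer(1e-4)
tiny = normalizer(1e-8)  # shrinking the lower limit inflates the integral
```

Each factor-of-10000 shrink of the lower limit multiplies the near-zero contribution by roughly $10000^{0.25} = 10$, confirming the divergence.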
10,570
Does the Bayesian posterior need to be a proper distribution?
Defining the set $$ \text{Bogus Data} = \left\{ x:\int f(x\mid \theta)\,\pi(\theta)\,d\theta = \infty \right\} \, , $$ we have $$ \mathrm{Pr}\left(X\in\text{Bogus Data}\right) = \int_\text{Bogus Data} \int f(x\mid \theta)\,\pi(\theta)\,d\theta\,dx = \int_\text{Bogus Data} \infty\,dx \, . $$ The last integral will b...
10,571
Does the Bayesian posterior need to be a proper distribution?
Any "distribution" must sum (or integrate) to 1. I can think a few examples where one might work with un-normalized distributions, but I am uncomfortable ever calling anything which marginalizes to anything but 1 a "distribution". Given that you mentioned Bayesian posterior, I bet your question might come from a class...
10,572
Does the Bayesian posterior need to be a proper distribution?
Later is better than never. Here is a natural and useful counterexample I believe, arising from Bayesian nonparametrics. Suppose ${\mathbf{x}} = \left( {{x_1},...,{x_i},...{x_n}} \right) \in {\mathbb{R}^n}$ has posterior probability distribution $p\left( {\left. {\mathbf{x}} \right|D} \right) \propto {e^{ - \frac{1}{2}...
10,573
Does the Bayesian posterior need to be a proper distribution?
An improper posterior distribution typically arises when you're using an improper prior distribution (though, as another answer here shows, a proper prior does not guarantee a proper posterior). The implication of this is that the asymptotic results do not hold. As an example, consider binomial data consisting of $n$ successes and 0 failures; if using $Beta(0,0)$ as the prior distribution, then the posterior will be...
10,574
Dropout makes performance worse
Dropout is a regularization technique, and is most effective at preventing overfitting. However, there are several places where dropout can hurt performance. Right before the last layer. This is generally a bad place to apply dropout, because the network has no ability to "correct" errors induced by dropout before the ...
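For reference, the training/test behaviour being discussed is the standard "inverted dropout" forward pass, sketched here in plain NumPy (not tied to any particular framework):

```python
import numpy as np

def dropout_forward(x, p_drop, rng, train=True):
    """Inverted dropout: during training, zero each unit with
    probability p_drop and rescale survivors by 1/(1 - p_drop) so the
    expected activation matches test time, when the layer is just the
    identity."""
    if not train or p_drop == 0.0:
        return x
    mask = (rng.random(x.shape) >= p_drop) / (1.0 - p_drop)
    return x * mask

rng = np.random.default_rng(0)
x = np.ones(10_000)
train_out = dropout_forward(x, p_drop=0.5, rng=rng)
test_out = dropout_forward(x, p_drop=0.5, rng=rng, train=False)
```

Because roughly half the activations are zeroed at random on each pass, noise injected right before the output has no later layers to absorb it, which is the placement problem the answer describes.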
10,575
What's the difference between mathematical statistics and statistics?
There are three types of statisticians; those that (prefer to) work with real data, those that (prefer to) work with simulated data, those that (prefer to) work with the symbol $X$. math stat types would be (3). Typically, type (1) statisticians have some prefix attached to make clear the source of the data they wo...
10,576
What's the difference between mathematical statistics and statistics?
Mathematical statistics concentrates on theorems and proofs and mathematical rigor, like other branches of math. It tends to be studied in math departments, and mathematical statisticians often try to derive new theorems. "Statistics" includes mathematical statistics, but the other parts of the field tend to concentrat...
10,577
What's the difference between mathematical statistics and statistics?
The boundaries are always very blurry but I would say that mathematical statistics is more focused on the mathematical foundations of statistics, whereas statistics in general is more driven by the data and its analysis.
10,578
What's the difference between mathematical statistics and statistics?
There is no difference. The science of Statistics as it is taught in academic institutions throughout the world is basically short for "Mathematical Statistics". This is divided into "Applied (mathematical) Statistics" and "Theoretical (mathematical) Statistics". In both cases, Statistics is a subfield of math (or appl...
10,579
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
The point is that sometimes, different models (for the same data) can lead to likelihood functions which differ by a multiplicative constant, but the information content must clearly be the same. An example: We model $n$ independent Bernoulli experiments, leading to data $X_1, \dots, X_n$, each with a Bernoulli distrib...
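The Bernoulli-vs-Binomial point can be verified numerically (the grid and counts below are chosen arbitrarily): the two likelihoods differ only by the constant $\binom{n}{k}$ and therefore share the same maximizer.

```python
import numpy as np
from scipy.special import comb

n, k = 10, 7                      # n trials, k successes
theta = np.linspace(0.01, 0.99, 99)

# Likelihood of one particular Bernoulli sequence with k successes...
lik_bernoulli = theta**k * (1 - theta)**(n - k)
# ...and of the Binomial count k: the same function times C(n, k).
lik_binomial = comb(n, k) * theta**k * (1 - theta)**(n - k)

ratio = lik_binomial / lik_bernoulli  # constant in theta
```

Both curves peak at the same theta (here k/n = 0.7), which is why the multiplicative constant carries no information about the parameter.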
10,580
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
It basically means that only relative value of the PDF matters. For instance, the standard normal (Gaussian) PDF is: $f(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$, your book is saying that they could use $g(x)=e^{-x^2/2}$ instead, because they don't care for the scale, i.e. $c=\frac{1}{\sqrt{2\pi}}$. This happens because they...
10,581
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
I cannot explain the meaning of the quotation, but for maximum-likelihood estimation, it does not matter whether we choose to find the maximum of the likelihood function $L(\mathbf x; \theta)$ (regarded as a function of $\theta$) or the maximum of $aL(\mathbf x; \theta)$ where $a$ is some constant. This is because w...
10,582
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
In layman's terms, you'll often look for the maximum likelihood and $f(x)$ and $kf(x)$ share the same critical points.
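A quick numerical sketch of the point made in the answers above (the unit-variance Gaussian sample and the grid search over the mean are made up for illustration): dropping the multiplicative constant $1/\sqrt{2\pi}$ shifts every log-likelihood value by the same amount, so the location of the maximum is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)

# Candidate values for the mean parameter.
mus = np.linspace(0.0, 4.0, 401)

# Full Gaussian log-likelihood (unit variance), and a version that
# drops the constant term -n/2 * log(2*pi).
def loglik_full(mu):
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (x - mu) ** 2)

def loglik_kernel(mu):
    return np.sum(-0.5 * (x - mu) ** 2)

full = np.array([loglik_full(m) for m in mus])
kernel = np.array([loglik_kernel(m) for m in mus])

# The constant shifts every value equally, so the argmax is identical.
print(mus[np.argmax(full)], mus[np.argmax(kernel)])
```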
10,583
What does "likelihood is only defined up to a multiplicative constant of proportionality" mean in practice?
I would suggest not to drop from sight any constant terms in the likelihood function (i.e. terms that do not include the parameters). In usual circumstances, they do not affect the $\text {argmax}$ of the likelihood, as already mentioned. But: There may be unusual circumstances when you will have to maximize the like...
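To illustrate the caveat in the answer above with a toy comparison (the data and both fitted families are invented for illustration): within one family a constant is harmless, but when comparing maximized log-likelihoods across families, dropping the normal family's $-\tfrac{n}{2}\log 2\pi$ term can flip the verdict.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=100)  # truly exponential data

n = x.size
mu, sigma = x.mean(), x.std()  # MLEs for the normal family

# Maximized FULL log-likelihoods under each family.
ll_normal = np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                   - (x - mu) ** 2 / (2 * sigma**2))
ll_expon = np.sum(-np.log(mu) - x / mu)  # exponential MLE scale = mean

# Dropping the normal's -n/2 * log(2*pi) term (a "constant" within
# that family) shifts its score relative to the exponential's:
ll_normal_dropped = ll_normal + 0.5 * n * np.log(2 * np.pi)

print(ll_expon > ll_normal)          # full likelihoods: exponential wins
print(ll_expon > ll_normal_dropped)  # after dropping the constant, it may not
```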
10,584
What is the manifold assumption in semi-supervised learning?
Imagine that you have a bunch of seeds fastened on a glass plate, which is resting horizontally on a table. Because of the way we typically think about space, it would be safe to say that these seeds live in a two-dimensional space, more or less, because each seed can be identified by the two numbers that give that see...
10,585
What is the manifold assumption in semi-supervised learning?
First, make sure that you understand what an embedding is. It's borrowed from mathematics. Roughly speaking, it is a mapping of the data into another space (often called embedding space or feature space), preserving some structure or properties of the data. Note that its dimensionality can be bigger or smaller than the...
10,586
Seeking certain type of ARIMA explanation
My suggested reading for an intro to ARIMA modelling would be Applied Time Series Analysis for the Social Sciences (1980) by R. McCleary, R. A. Hay, E. E. Meidinger, and D. McDowall. This is aimed at social scientists, so the mathematical demands are not too rigorous. Also, for shorter treatments I would suggest two Sage Green Boo...
10,587
Seeking certain type of ARIMA explanation
I will try and respond to the gentle urging of whuber to simply “respond to the question” and stay on topic. We are given 144 monthly readings of a series called “The Airline Series” . Box and Jenkins were widely criticized for providing a forecast that was wildly on the high side due to the “explosive nature” of a rev...
10,588
Seeking certain type of ARIMA explanation
I tried to do that in chapter 7 of my 1998 textbook with Makridakis & Wheelwright. Whether I succeeded or not I'll leave others to judge. You can read some of the chapter online via Amazon (from p311). Search for "ARIMA" in the book to persuade Amazon to show you the relevant pages. Update: I have a new book which is f...
10,589
Seeking certain type of ARIMA explanation
I would recommend Forecasting with Univariate Box - Jenkins Models: Concepts and Cases by Alan Pankratz. This classic book has all the features that you asked for: uses minimal math extends the discussion beyond building a model into using that model to forecast specific cases uses graphics as well as numerical result...
10,590
Seeking certain type of ARIMA explanation
An ARIMA model is simply a weighted average. It answers the double question: how many periods ($k$) should I use to compute a weighted average, and precisely what are the $k$ weights? It answers the maiden's prayer to determine how to adjust to previous values (and previous values ALONE) in order to project the series...
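A minimal sketch of the "weighted average" view above (the AR(2) series and its coefficients are invented for illustration): simulate a series in which each value is a weighted combination of its two predecessors plus noise, then recover the $k=2$ weights by least squares.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate an AR(2) series: each value is a weighted average of the
# two previous values plus noise.
n, phi = 500, (0.6, 0.3)
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi[0] * y[t - 1] + phi[1] * y[t - 2] + rng.normal()

# Estimate the k=2 weights by regressing y[t] on y[t-1] and y[t-2].
X = np.column_stack([y[1:-1], y[:-2]])
w, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(w)  # should land near (0.6, 0.3)
```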
10,591
Transform Data to Desired Mean and Standard Deviation
Suppose you start $\{x_i\}$ with mean $m_1$ and non-zero standard deviation $s_1$ and you want to arrive at a similar set with mean $m_2$ and standard deviation $s_2$. Then multiplying all your values by $\frac{s_2}{s_1}$ will give a set with mean $m_1 \times \frac{s_2}{s_1}$ and standard deviation $s_2$. Now adding...
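The scale-then-shift recipe above can be sketched directly (the data and the targets $m_2 = 100$, $s_2 = 15$ are arbitrary toy values):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
m1, s1 = x.mean(), x.std()

m2, s2 = 100.0, 15.0  # target mean and standard deviation

# Scale first (fixes the spread), then shift (fixes the mean).
y = x * (s2 / s1)
y = y + (m2 - y.mean())

print(y.mean(), y.std())  # ~100.0 and ~15.0, up to floating point
```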
10,592
Transform Data to Desired Mean and Standard Deviation
Let’s consider the z-score calculation of data $x_i$ with mean $\bar{x}$ and standard deviation $s_x$. $$z_i = \dfrac{x_i-\bar{x}}{s_x}$$ This means that, given some data $(x_i)$, we can transform to data with a mean of $0$ and standard deviation of $1$. Rearranging, we get: $$x_i = z_i s_x+ \bar{x}$$ This gives us bac...
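The same idea in code (made-up data; note that $s_x$ here is the population standard deviation, matching the z-score formula in the answer):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 5.0, 8.0])

# Standardize to mean 0, sd 1, then rescale to the desired moments.
z = (x - x.mean()) / x.std()
target_mean, target_sd = 50.0, 10.0
x_new = z * target_sd + target_mean

print(x_new.mean(), x_new.std())
```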
10,593
Averaging correlation values
The simple way is to add a categorical variable $z$ to identify the different experimental conditions and include it in your model along with an "interaction" with $x$; that is, $y \sim z + x\#z$. This conducts all five regressions at once. Its $R^2$ is what you want. To see why averaging individual $R$ values may be...
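The $y \sim z + x\#z$ model above can be sketched with a hand-built design matrix (all data simulated for illustration: five conditions, each with its own intercept and slope). The pooled fit's $R^2$ summarizes all five regressions at once:

```python
import numpy as np

rng = np.random.default_rng(1)

# Five experimental conditions, each with its own intercept and slope.
n_per, k = 30, 5
z = np.repeat(np.arange(k), n_per)
x = rng.uniform(0, 10, size=k * n_per)
y = (1.0 + z) + (0.5 + 0.2 * z) * x + rng.normal(0, 1, size=k * n_per)

# Design matrix for y ~ z + x:z (condition-specific intercepts and slopes).
D = np.zeros((k * n_per, 2 * k))
for j in range(k):
    D[z == j, j] = 1.0               # intercept for condition j
    D[z == j, k + j] = x[z == j]     # slope for condition j

beta, *_ = np.linalg.lstsq(D, y, rcond=None)
resid = y - D @ beta
r2 = 1 - resid.var() / y.var()
print(round(r2, 3))  # R^2 of all five regressions fitted at once
```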
10,594
Averaging correlation values
For Pearson correlation coefficients, it is generally appropriate to transform the r values using a Fisher z transformation. Then average the z-values and convert the average back to an r value. I imagine it would be fine for a Spearman coefficient as well. Here's a paper and the wikipedia entry.
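A sketch of the transform–average–backtransform recipe above (the five correlation values are invented; Fisher's z is `arctanh` and its inverse is `tanh`):

```python
import numpy as np

r = np.array([0.45, 0.60, 0.52, 0.70, 0.58])  # five correlations

# Fisher z transform, average on the z scale, then back-transform.
z = np.arctanh(r)
r_avg = np.tanh(z.mean())

print(r_avg, r.mean())  # the z-based average differs from the plain mean
```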
10,595
Averaging correlation values
The average correlation can be meaningful. Also consider the distribution of correlations (for example, plot a histogram). But as I understand it, for each individual you have some ranking of $n$ items plus predicted rankings of those items for that individual, and you're looking at the correlation between an individua...
10,596
Averaging correlation values
What about using mean squared prediction error (MSPE) for the performance of the algorithm? This is a standard approach to what you are trying to do, if you are trying to compare predictive performance among a set of algorithms.
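MSPE itself is a one-liner (toy numbers; algorithm A's predictions are closer to the actuals, so its MSPE comes out smaller):

```python
import numpy as np

actual = np.array([3.1, 2.8, 4.0, 5.2])
pred_a = np.array([3.0, 2.9, 4.3, 5.0])  # algorithm A
pred_b = np.array([3.5, 2.0, 4.8, 4.5])  # algorithm B

# Mean squared prediction error: lower is better.
def mspe(pred):
    return np.mean((actual - pred) ** 2)

print(mspe(pred_a), mspe(pred_b))
```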
10,597
Is there a "hello, world" for statistical graphics?
Two thoughts: A. When I try to get at the essence of "Hello World", it's the minimum that must be done in the programming language to generate a valid program that prints a single line of text. That suggests to me that your "Hello World" should be a univariate data set, the most basic thing you could plug into a statis...
10,598
Is there a "hello, world" for statistical graphics?
I would probably start with scatterplots and demonstrate the four ugly correlations.
10,599
Is there a "hello, world" for statistical graphics?
The histogram of a sample of a normally distributed random variable.
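A dependency-light version of this "hello, world" (text output in place of a plotting library; the sample size and bin count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(size=1000)  # standard normal draws

counts, edges = np.histogram(sample, bins=20)

# Crude text histogram: one '#' per 10 observations in each bin.
for c, lo in zip(counts, edges[:-1]):
    print(f"{lo:6.2f} | " + "#" * int(c // 10))
```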
10,600
Is there a "hello, world" for statistical graphics?
I think the answer is "no". That is, there is no generally agreed upon answer to your question. @StasK points to the scatterplot. But I'd consider what plot does in R: It depends on the data! You could argue that univariate statistics are simpler than bivariate ones. So... perhaps the most basic thing is a histogram; ...