Permutation testing for machine learning: permute entire set or only training set?
I came across the same question and found a simulation study (Valente et al., 2021) showing that permuting all the data before cross-validation is the correct approach. Here is the reasoning.

A theoretical insight into why the other resampling schemes result in an inflation of false positives can be gained from Bengio and Grandvalet (2004), where the authors describe the error covariance matrix across all samples in terms of blocks and decompose the variance of the cross-validation error (i.e. the sum of all the elements of the covariance matrix) into three components. The first component is the variance of errors for each test data point (the main diagonal of the covariance matrix); the other two stem from the use of cross-validation: the first arises from the fact that all the samples in a partition are tested using the same model (which changes per test partition), while the second arises from the overlap present in the training data of different partitions (this overlap is more pronounced when more partitions are used). When conducting a permutation test, if the resampling takes place in both the training and testing datasets, or in only one of them, at each cross-validation iteration, then the cross-validation-related terms in the variance decomposition are ignored, since it is implicitly assumed that the data across different iterations are independent. In other words, these resampling schemes assume that one or both of the cross-validation-related variance components described above are zero. This underestimation results in a sharper (i.e. lower-variance) null distribution, and therefore overconfident statements and invalid tests. On the other hand, when the data/label association is kept constant across the different cross-validation iterations, the cross-validation-related variance components are also kept in the estimation under H0, resulting in a more realistic empirical null distribution.

Valente, G., Castellanos, A. L., Hausfeld, L., De Martino, F., & Formisano, E. (2021). Cross-validation and permutations in MVPA: Validity of permutation strategies and power of cross-validation schemes. NeuroImage, 238, 118145. https://doi.org/10.1016/j.neuroimage.2021.118145
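The permute-before-CV scheme can be sketched as follows. This is only an illustrative sketch: the classifier, data sizes, and permutation count are arbitrary choices, not anything prescribed by Valente et al.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Toy data (hypothetical): 100 samples, 5 features, binary labels.
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)

def cv_accuracy(X, y):
    """Mean 5-fold cross-validated accuracy for a fixed data/label pairing."""
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv).mean()

observed = cv_accuracy(X, y)

# Null distribution: permute the labels ONCE per permutation, BEFORE the CV
# loop, so the same (shuffled) data/label pairing is kept across all folds.
n_perm = 99
null = np.array([cv_accuracy(X, rng.permutation(y)) for _ in range(n_perm)])

# One-sided permutation p-value with the usual +1 correction.
p = (1 + np.sum(null >= observed)) / (n_perm + 1)
```

The invalid schemes the quote warns against would instead reshuffle the labels inside each fold (or only within the training split), breaking the fixed pairing across CV iterations.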
Why are most epidemic models continuous-time?
This is a really interesting question and I doubt that there is any one 'correct' answer to it, but here are my thoughts on the reasons, which can be split into three categories.

History

I suggest you have a look at Section 2.7, Discrete-Time Models, in the book Modeling Infectious Diseases in Humans and Animals by Keeling and Rohani. This book points out that most of the literature on epidemic modelling is focussed on continuous-time differential equation models, following on from the SIR differential equation model which was first analysed by Kermack and McKendrick in 1927. The authors state:

"This [focus on differential equation models in our book] is partly because the vast majority of models in the literature are based on this framework. The inherent assumption has been that the processes of disease transmission occur in real time and that variability in factors such as the infectious period may be dynamically important."

However, they also acknowledge that some discrete-time models have been developed, for example the "chain binomial" models, which assume that there are successive generations of new infections.

Mathematics

Most differential equation models based on the SIR model cannot be solved explicitly. However, they can usually be simulated in a straightforward manner, so the lack of easy solvability is not really a problem. One potential issue with discrete-time models (also discussed in Section 2.7 of the book) is that we have to choose a time step, and it may not be obvious what the time step should be. Perhaps the time step should be the 'generation time', i.e. the time between successive individuals being infected in a chain of infection. However, these times may vary between individuals, and may be very uncertain in real applications. So we may not want to embed an assumption regarding the generation time in the formulation of the model. Hence continuous-time models can seem more attractive.

Another issue is that the process of formulating a discrete-time model can 'fail', in the sense that the discrete-time model does not exhibit the expected properties of the real disease system; e.g. see Glass, K., Xia, Y., & Grenfell, B. T. (2003). Interpreting time-series analyses for continuous-time biological models: measles as a case study. Journal of Theoretical Biology, 223(1), 19-25.

Relating the model to data

One apparent motivation for discrete-time models is that much of the data are collected at discrete time intervals, e.g. daily reports of new infections. However, more sophisticated techniques for model parameterisation have made this consideration less relevant. We can use Bayesian techniques to fit epidemic models with latent variables: we can formulate a continuous-time process model (which is not directly observed) and use discrete-time data to parameterise it. There are countless examples of this approach in the literature, e.g. Ster, I. C., Singh, B. K., & Ferguson, N. M. (2009). Epidemiological inference for partially observed epidemics: The example of the 2001 foot and mouth epidemic in Great Britain. Epidemics, 1, 21-34.
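The point that these models "can usually be simulated in a straightforward manner" is easy to illustrate. Below is a minimal sketch of an explicit Euler discretisation of the SIR equations; the parameter values (beta = 0.3, gamma = 0.1) and the step size are made up for illustration, and the continuous-time model is the limit as dt shrinks to zero.

```python
import numpy as np

# SIR model in proportions of the population (S + I + R = 1 throughout).
beta, gamma = 0.3, 0.1      # transmission and recovery rates per day (illustrative)
S, I, R = 0.99, 0.01, 0.0   # initial conditions: 1% infected
dt = 0.1                    # time step in days; dt -> 0 recovers the ODE model

history = []
for _ in range(int(100 / dt)):          # simulate 100 days
    dS = -beta * S * I                  # new infections leave S
    dI = beta * S * I - gamma * I       # ...enter I, recoveries leave I
    dR = gamma * I                      # ...and enter R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    history.append(I)

peak_infected = max(history)            # height of the epidemic peak
```

With R0 = beta/gamma = 3, standard SIR theory puts the peak infected fraction near 1 - (1 + ln R0)/R0, about 30% here, which the simulation reproduces.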
Why are most epidemic models continuous-time?
This is an interesting question. I mean, I guess you could ask the same thing about derivatives. What does it really mean to be going 60 km per hour instantaneously if speed is distance travelled per unit time? It seems inherently discrete (measuring distance over a specific interval of time, even if that interval is very small), and yet the derivative continues to be used. I largely suspect that the reason differential equations are used over difference equations is not a matter of them being "better" but because they were developed to further understand the dynamics of the simple epidemics that motivated their development. If difference equations operate on the unit of days, how can one ask about the concept of an $\mathcal{R}_0$? Under the difference equation model, a single person might infect multiple people in a single day, who may then go on to infect several more people. How can we parse out how many new infections an index case creates without examining the dynamics as the time step we take becomes infinitesimally small? Because the SIR model and other models like it have been studied for nigh on a century, I highly suspect an answer exists "out there". It might be beneficial to start here, in which the author begins with the difference equations for an epidemic and derives the SIR model from them.
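The passage from difference to differential equations mentioned above can be written out in standard SIR notation. This is the textbook construction, not necessarily the linked author's exact notation:

```latex
% Difference equations over a small time step \Delta t:
S_{t+\Delta t} = S_t - \beta S_t I_t \,\Delta t, \qquad
I_{t+\Delta t} = I_t + (\beta S_t I_t - \gamma I_t)\,\Delta t, \qquad
R_{t+\Delta t} = R_t + \gamma I_t \,\Delta t.

% Dividing each by \Delta t and letting \Delta t \to 0 gives the SIR model:
\frac{dS}{dt} = -\beta S I, \qquad
\frac{dI}{dt} = \beta S I - \gamma I, \qquad
\frac{dR}{dt} = \gamma I,

% for which the basic reproduction number is well defined:
\mathcal{R}_0 = \frac{\beta}{\gamma}.
```

In the continuous limit, $\mathcal{R}_0$ falls out cleanly as the ratio of the infection rate to the recovery rate, which is exactly the quantity that is awkward to define when a "step" can contain several generations of infection.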
What is the sample space in a statistical model?
To begin with, a statistical model is a triple $(\Omega,\mathcal{F},P)$, where $\Omega$ is the sample space, $\mathcal{F}$ is a sigma-algebra of subsets of $\Omega$, and $P$ is a family of probability distributions that can be indexed by a parameter $\theta$. To make things clear, let's understand why we need each of these things.

$\Omega$ tells us all the possible values that each realization of a random experiment can take. In your case, each individual unit (a tree) takes a pair of values $(age, height)$, and the space of possible values for this pair is $\mathbb{R}^2$. So suppose you have data on a set of $n$ trees, $X_1,...,X_n$. Each individual $X_i=(age_i,height_i)\in\mathbb{R}^2 \implies (X_1,...,X_n)\in\mathbb{R}^{2n}$.

The second element of the statistical model is a sigma-algebra of subsets of $\Omega$, which lists all the subsets of our sample space whose probability we are interested in measuring. For example, we might be interested in measuring the probability that $X_i=(age_i,height_i)\in[10,20]\times[5,10]$, that is, the probability that a particular individual tree has age between 10 and 20 years and height between 5 and 10 m. For continuous values, the sigma-algebra we commonly take is the Borel sigma-algebra of $\mathbb{R}^n$. For discrete data it is easier to grasp what the sigma-algebra contains. Take as an example the experiment of rolling a six-sided die. In this case $\Omega=\{1,2,3,4,5,6\}$, because each realization of the experiment can only assume one of these values. But we're interested in measuring probability on subsets of $\Omega$. For example, take $A=\{1,2,3\}\subseteq\Omega$. We might be interested in knowing $P(A)$, the probability that a particular realization of the experiment takes a value in $A$: in other words, the probability that the die returns 1, 2 or 3. Also note that we can be interested in the probability of the complement of $A$, $A^C=\{4,5,6\}$, or of a union or intersection of sets contained in $\Omega$.

Finally, $P$, the family of probability distributions, is a set from which we might choose a particular distribution, indexed by a parameter, such that this distribution best fits the observed data of the experiment by some criterion, for example maximum likelihood or a regression. In your problem, you're trying to explain height based on age. That means you're trying to find the density function that best describes height: in practical terms you have a family of distributions $\{f_{\theta}(height),\theta\in\Theta\}$ and you're trying to find which $\theta$ gives you the best fit for height, and the criterion for choosing this $\theta$ is the regression you're trying to run. Age is being used as a means to find the best distribution for height; in this case, we take age as given, not as a random variable. I think the answer would be something along the lines of: $\Omega=\mathbb{R}^n,\ \mathcal{F}=\mathbb{B}(\mathbb{R}^n),\ P=\{f_{\theta,age}(height),\theta\in\Theta\}$.

If you were trying to find a joint density for height and age or something like that, then you would be dealing with a statistical model whose sample space is $\mathbb{R}^{2n}$, because you'd be treating both variables as random. That means you might have the same data, but depending on what you're doing, the statistical model of interest can change. If something is wrong, constructive comments are welcome.
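The die example can be made concrete with a small sketch. Here the uniform measure on $\Omega$ is the modelling assumption (a fair die), and the function `prob` plays the role of $P$ restricted to finite events:

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die.
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """P(A) for an event A (a subset of omega) under the uniform measure."""
    assert event <= omega, "an event must be a subset of the sample space"
    return Fraction(len(event), len(omega))

A = {1, 2, 3}
p_A = prob(A)                     # probability the die shows 1, 2 or 3
p_A_complement = prob(omega - A)  # probability of the complement A^C = {4, 5, 6}
```

For a finite sample space like this one, the sigma-algebra is simply the power set of $\Omega$, so every subset is a legal argument to `prob`; for continuous spaces such as $\mathbb{R}^n$ that is no longer possible, which is why the Borel sigma-algebra is needed.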
What is the sample space in a statistical model?
I'll keep it as simple as I can. The sample space depends on your sampling method, but in your case it is probably $\mathbb R^n$. Let's see how else it could be:

- Let's say you decide to sample $n$ trees (it's not really relevant where and how) and measure their age and height. In that case, the sample you gather ranges over the space $\mathbb R^{2n}$. Since you decided the sample size beforehand, that is indeed the sample space's dimensionality.

- Let's say you go for another, fancier sampling method: you keep gathering data until you find a tree higher than 10 metres. You can absolutely do that. Of course, the sample space no longer has a fixed dimensionality, and you simply can't express it unless you resort to more complex mathematical constructions. You may say that it is $\mathbb R^\infty$, but that is not really accurate.

- Let's now drop this overly complicated case and think of a more useful example: you sample a fixed number $n$ of trees of some given ages of interest to you (or you may grow them for a fixed time span) and then you measure the height. Age is not random; it depends on your experiment design, so age is not really sampled. The sample space is $\mathbb R^n$.

Anyway, more often than not in observational studies, where you don't decide the covariates in advance but aim to build a regression model, statisticians condition the sample and the model on the values of the covariates. I understand that you have a model where height is the target variable and age is the covariate. In that case you condition everything on the observed ages, and when you condition something on something else, the second thing is no longer random, even if it has been sampled as in the first bullet above. That's why your sample space shrinks from $\mathbb R^{2n}$ to $\mathbb R^n$. This has some useful theoretical consequences (and some bad ones too, to be fair), and this is the reason why books tend to represent sample spaces this way in the case of regression models, but it does depend on the book.

The others who commented raised the concern that you may decide to use $\mathbb R^+$ instead of $\mathbb R$, and, more importantly, that your definition of a statistical model is both a little reductive and not very useful. In any case, I hope I helped you understand what the sample space is.
What is the sample space in a statistical model?
A sample space is the set of all possible outcomes of a random experiment. An event is a subset of the sample space. A probability function takes an event as input and outputs a real number between 0 and 1 (a probability). A stochastic model captures our understanding of the random experiment. To summarize all the possible ways the outcome $(age, height)$ of a stochastic model can be chosen, with different probabilities, a distribution is used. This distribution (or likelihood) typically involves some unknown parameters (such as the slope of age vs. height, and the height-intercept bias) that are inferred using statistical inference. Each possible parameter setting gives rise to a different stochastic model, and the collection of all such stochastic models is usually referred to as a statistical model. So a statistical model with unknown parameters becomes a stochastic model once its parameters are inferred. For the tree dataset, the stochastic model has age on the x-axis, height on the y-axis, and probability on the z-axis. That makes the sample space $\mathbb{R}^2$, with the z-axis being the probability distribution on that sample space. The task of inferring/learning the unknown parameters (say, using gradient descent) is called inference. Guessing the height given the age is called prediction: it is a kind of fine-tuning where we know the age and fine-tune the model's output to obtain the height. This is done by passing the age to the stochastic model, which outputs a height, and it falls under the purview of decision.

References:
1. Blitzstein, J. K., & Hwang, J. (2015). Introduction to Probability. CRC Press.
2. Thorvaldsen, S. Using statistical methods to model the fine-tuning of molecular machines and systems.
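The inference-then-prediction pipeline described above can be sketched end to end. Everything here is illustrative: the data are synthetic, the "true" slope and intercept are made up, and plain gradient descent on squared error stands in for whatever inference procedure one actually prefers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tree data: age in years, height in metres, generated from a
# made-up "true" line height = 0.5 * age + 2 plus Gaussian noise.
age = rng.uniform(1, 50, size=200)
height = 0.5 * age + 2.0 + rng.normal(0.0, 1.0, size=200)

# Inference: learn slope and intercept by gradient descent on mean squared
# error. Centring the covariate decouples the two gradient directions.
age_c = age - age.mean()
slope, intercept = 0.0, 0.0
lr = 1e-3
for _ in range(20_000):
    err = slope * age_c + intercept - height
    slope -= lr * 2.0 * np.mean(err * age_c)
    intercept -= lr * 2.0 * np.mean(err)

# Prediction (decision): pass an age to the fitted model to guess a height.
def predict(a):
    return slope * (a - age.mean()) + intercept
```

With the centred parameterisation, the fitted intercept converges to the mean observed height and the slope to the usual least-squares slope, recovering something close to the generating line.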
How to correctly analyze fatality rate and daily deaths of Chinese and Italian COVID-19 outbreak?
Reason 1: something technical about the computation. Dying occurs with some delay after getting sick. As a consequence, the ratio of the people who have died to the people who have gotten sick is not equal to the fraction of people who will die. (Still, if the numbers of sick cases and death cases both grow exponentially with the same factor, you might expect this ratio to remain constant; but keep in mind that the growth is not exponential and that this is only a simplified model.) Reason 2: something important about the data acquisition. You might say: OK, then let's compare the number of death cases with the number of people sick several days ago (shifted by the average time between getting sick and dying). But the most important reason why the death rate based on these statistics is not constant and not comparable is that those numbers are only the reported cases, which may be far fewer than the real cases. So you are not computing a real death rate: the statistic (reported/confirmed cases) is not what you think it is (the number of cases). This is especially clear in the curve of cases for China, which has a bump because the number of cases rapidly increased after the definitions were changed (from positively tested people to people with clinical symptoms).
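A toy calculation of Reason 1 (all numbers invented): with cumulative cases growing geometrically and a fixed confirmation-to-death delay, deaths-to-date divided by cases-to-date understates the true ratio, while lagging the denominator recovers it.

```python
# Hypothetical parameters for illustration only.
true_cfr = 0.10   # assumed true case fatality ratio
growth = 1.2      # cumulative cases multiply by 1.2 per day
delay = 7         # assumed days from confirmation to death

cases = [100 * growth ** t for t in range(30)]  # cumulative confirmed cases
deaths = [true_cfr * (cases[t - delay] if t >= delay else 0)
          for t in range(30)]                   # deaths lag cases by `delay` days

naive_cfr = deaths[-1] / cases[-1]            # deaths to date / cases to date
lagged_cfr = deaths[-1] / cases[-1 - delay]   # deaths vs. cases `delay` days ago
```

Here `naive_cfr` equals `true_cfr / growth**delay`, i.e. roughly a third of the true value, while `lagged_cfr` matches it exactly; of course this only holds because the toy model ignores Reason 2 entirely.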
How to correctly analyze fatality rate and daily deaths of Chinese and Italian COVID-19 outbreak?
Note that in your Wikipedia definition of case fatality rate, you NEED to know the eventual outcome of all individuals infected with the disease. As they note, of the 100, 9 die and 91 recover; they do not live on with the infection. Your data do not show the number who recovered from the disease. If the lag between confirmed case and death is long, you underestimate the CFR. The CFR can also be biased by the number of unconfirmed cases who die from the disease and are confirmed as cases based on cause of death.
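A tiny worked example of the point above (invented numbers): mid-outbreak, deaths/confirmed and deaths/(deaths + recovered) typically bracket the eventual CFR, because the open cases have not yet resolved either way.

```python
# Hypothetical mid-outbreak snapshot.
confirmed = 1000
deaths = 30
recovered = 270
still_sick = confirmed - deaths - recovered   # 700 cases with unknown outcome

naive = deaths / confirmed                    # biased low if deaths lag cases
closed_cases = deaths / (deaths + recovered)  # biased high if recovery lags death
```

Only once `still_sick` reaches zero do the two estimates coincide at the true CFR.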
How to correctly analyze fatality rate and daily deaths of Chinese and Italian COVID-19 outbreak?
Recently something occurred to us when we also performed tests. To start with, the fatality rate does not quite describe the number of deaths caused by a particular disease. We tested COVID-19 patients with and without chronic disease, and it turns out that patients with chronic disease have a higher chance of developing pneumonia and acute respiratory failure. It might not be the COVID-19 virus that caused the death; it might be the chronic disease or another condition. If you study the medical system, every patient is likely to be grouped by a DRG code. A DRG code is the hospital's way of grouping all diseases for a particular patient and deciding patient priority. In other words, many diseases appear together, and it might be a pre-existing disease weakening the immune system that causes death. As far as I know, many patients in China who could not be diagnosed may have been categorized under a different cause instead of COVID (flu, for example). The death rate also cannot reflect the age structure. As we all know by now, this virus is particularly bad for seniors, so we cannot compare a country with a larger senior population to a country with a middle-aged one. The death rate is complicated; you also might not be comparing at the right stage. Until all patients are discharged from the hospital, you do not know which of those admitted will die and which will be discharged.
Support Vector Machine with Perceptron Loss
Maximizing the margin is not just "rhetoric". It is the essential feature of support vector machines and ensures that the trained classifier has the optimal generalization properties. More precisely, making the margin large maximizes the probability that the classification error on new data will be small. The theory behind it is called the Vapnik-Chervonenkis (VC) Theory. In your question you consider a soft-margin classifier, but the reasoning is equally valid for hard-margin classifiers, which work only on linearly separable datasets. In that case, all your $\zeta_i$'s would simply turn out $0$, thus minimizing the sum in your objective function. Therefore, for simplicity, I'll reformulate your argument for linearly separable data. Training a support vector machine amounts to optimizing: $$\min ~ \lVert w \rVert ^2 \\ \text{s.t.} ~ ~ ~ y_i (w^T x_i + b) \ge 1$$ We want to minimize $\lVert w \rVert ^2$ because that maximizes the margin $\gamma$: $$\gamma = \frac{1}{\lVert w \rVert}$$ The constraints ensure not only that all points from the training set are correctly classified, but also that the margin is maximized. As long as there are points for which $y_i (w^T x_i + b) \lt 1$, the training continues by adjusting $w$ and $b$. Now, you suggest that we can use different constraints: $$\min ~ \lVert w \rVert ^2 \\ \text{s.t.} ~ ~ ~ y_i (w^T x_i + b) \ge 0$$ The solution to this problem is trivial: Simply set $w=0$ and $b=0$, and $\lVert w \rVert ^2$ will be zero, too. However, this doesn't give you any information about the class boundary: $w$, being now a null vector, doesn't define any hyperplane at all! Or, to have a look from a different perspective: Imagine having performed the training using some different learning algorithm, let's say, the perceptron algorithm. You have found some $w$ and $b$ which result in perfect classification on your training set. In other words, your constraint $y_i (w^T x_i + b) \ge 0$ is satisfied for every $i$. 
But, is this class boundary realistic? Take the following example: The class boundary separates the blue from the red points, but almost touches one in each class (i.e. the "perceptron" condition is satisfied, but not the SVM condition): In contrast, the one below has a "large margin", satisfying the SVM condition: It is intuitively clear that this classifier, having the boundary as far as possible from the training points, has a better chance of good generalization.
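A small numeric check of the hard-margin conditions above, using scikit-learn's SVC with a very large C as a stand-in for a hard-margin linear SVM (the two clusters below are made-up data):

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([-2.0, -2.0], 0.5, size=(20, 2)),
               rng.normal([2.0, 2.0], 0.5, size=(20, 2))])
y = np.array([-1] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

margins = y * (X @ w + b)        # y_i (w^T x_i + b); all >= 1 at the optimum
gamma = 1.0 / np.linalg.norm(w)  # the geometric margin 1 / ||w||
```

Every training point satisfies the SVM constraint (functional margin at least 1, with equality for the support vectors), and `gamma` gives the distance from the boundary to the closest points; a perceptron solution would only guarantee `margins >= 0`.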
Is it always possible to find the feature map from a given kernel?
Short answer: It depends on what you mean by find and on the precise kind of kernel you are looking at. In many cases you can prove the abstract existence of such a feature map, but in practice it is often hard, and in general impossible, to "write it down". Furthermore, the constructions are mathematically subtle: you need to be careful about technical assumptions. Background Let your kernel be defined as $K:\Omega\times\Omega\rightarrow\mathbb{R}$ (the domain is important!). There are many feature maps, in the sense that a feature map is an embedding of $\Omega$ into a suitable Hilbert space. Of course, there is always the canonical feature map: $\Phi:\Omega\rightarrow\mathbb{R}^\Omega, x\mapsto K(x,\cdot).$ Judging from the right-hand side of your equation, you are looking for a different feature map: one which maps into "vectors", i.e. into $l^2$, the Hilbert space of square-summable sequences with the canonical scalar product $\langle x,y\rangle=\sum_i x_i y_i$, aka "$x^Ty$". Mercer's Theorem The key fact for obtaining such a feature map is Mercer's theorem (see Theorem 4.49 in [1]). If your kernel $K$ is continuous and its domain of definition $\Omega$ is compact, then the map defined on square-integrable functions $$ M_K: L^2(\Omega) \rightarrow L^2(\Omega), f\mapsto \int_\Omega f(t)K(t,\cdot)dt$$ is a so-called Hilbert-Schmidt operator. The theory of these operators tells us that there exists a countable family of functions $\phi_i:\Omega\rightarrow\mathbb{R}$ which spans $L^2(\Omega)$, such that one can write the kernel $K$ as $$ K(x,y) = \sum_i \phi_i(x)\phi_i(y),$$ which is, of course, exactly the feature map you are looking for. Further aspects To find the $\phi_i$ explicitly, you need to find all solutions of the integral equation $M_K(\phi)=\lambda \phi$. This is very hard (or impossible) in general. Even this special kind of feature map is not unique. There will be other families $\psi_i$ which also allow such a representation. 
The feature map depends not only on the Kernel $K$ but also on its domain $\Omega$. [1]: Ingo Steinwart; Andreas Christmann "Support Vector Machines"
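That said, for some simple kernels the feature map can be written down directly. A minimal sketch (a standard textbook example, constructed by expanding the square rather than by the Mercer route above): the homogeneous quadratic kernel on R^2 has an explicit three-dimensional feature map.

```python
import math

def K(x, y):
    """Homogeneous quadratic kernel K(x, y) = (x . y)^2 on R^2."""
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    """Explicit feature map: (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = (1.0, 2.0), (3.0, -1.0)
# K(x, y) == <phi(x), phi(y)> holds for every pair of points.
```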
Is there such a thing as a "good/bad" seed in pseudo-random number generation?
You might want to look at Matsumoto et al.'s "Common Defects in Initialization of Pseudorandom Number Generators". In one sense it shouldn't matter what seed you use, in that with a good PRNG, weird results should be rare, just as low-probability outcomes in nature are rare. Whether you can use the same seed all of the time, as some people do, depends on the application. I personally would never do this, but I run Monte Carlo simulations in which different seeds might produce different patterns of outcomes, and I want to know whether this happens. The data that I ultimately care about is the collection of data I get when I run the simulation many times with the same parameters but different seeds. I can then perform statistics on this data, plot summary data, etc. Mersenne Twister and other algorithms: Concerning jbowman's comment, not everyone is as negative about Mersenne Twisters as O'Neill (pcg-random.org/other-rngs.html). If you don't care about M.T.'s extremely long period (arguably overkill), there are better algorithms (including, I think, O'Neill's PCG algorithms, though there's been some debate between O'Neill and Vigna about whose algorithms are better). However, M.T. is still common in software packages that I trust, where PRNG quality matters. (By contrast, some Java rand() functions are not as good.) Any decent implementation of Mersenne Twister will initialize its state with another, simpler PRNG, which might differ between implementations (cf. these remarks by Matsumoto). The Twister should then be iterated at least 624 times (better yet, twice that number) before using its output. If you do that, you shouldn't have a problem with nearby seeds. (A high-quality agent-based modeling library, MASON, uses Mersenne Twister with this scheme, and if you tell it to perform multiple runs with the same parameters, by default it will simply increment the first seed for each subsequent run.) 
Another issue with the Mersenne Twister is that if its internal 624x32-bit state has many zero bits, it takes many iterations to get out of that pattern. (See Panneton et al. "Improved Long-Period Generators Based on Linear Recurrences Modulo 2", which describes a better algorithm, WELL, although with shorter periods.) However, if you initialize the Mersenne Twister in the usual way with another pseudorandom number generating algorithm, I would think that the zeros issue would be unlikely to be a serious problem, since it should be rare that such an algorithm gives a Mersenne Twister a starting state with a lot of zero bits. (The most recent version of M.T. has less of a problem with zero bits; see Saito and Matsumoto, "SIMD-oriented Fast Mersenne Twister: a 128-bit Pseudorandom Number Generator", pages 13-14.) (N.B. Extra details on Mersenne Twisters: The remarks above are for the most common kind of Mersenne Twister with a 624x32-bit internal state and a period of $2^{19937}−1$. If you seed it directly, you need to provide 624 32-bit numbers as a seed. Since it's usually undesirable to have to do that, by default you give a wrapper function a 32-bit (or possibly 64-bit) seed, which passes it to a simpler, lower-quality pseudorandom number generator. This is used to generate the 624x32-bit seed for the Mersenne Twister. However, the way that a M.T. works is that it takes successive numbers from its state, passes them to a function that rearranges the bits in the number, and outputs the result. When all 624 numbers are used this way, it performs an operation on the entire internal state (including a step known as a "twist") to generate a new 624x32-bit state. This is why you should not use the first 624 or 1248 outputs; they are partially the result of a lower-quality pseudorandom number generator, and are not due to the full Mersenne Twister algorithm. 
Kneusel's introductory book on PRNGs includes an introduction to Mersenne Twisters, but read the xorshift section first.) (Other introductory texts--not quite as easy as Kneusel, and they don't necessarily cover Mersenne Twisters, if that's what you're interested in--include: Johnston's Random Number Generators--Principles and Practices. Knuth, Chapter 3 in volume 2 of the 3rd edition of The Art of Computer Programming (still deserves to be called the "bible" of PRNGs, even though there have been crucial innovations since it was published). Several papers at Pierre L'Ecuyer's site The paper by O'Neill mentioned above.)
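The "run it many times with different seeds" workflow described above can be sketched as follows (CPython's `random.Random` is a Mersenne Twister under the hood; the simulation here is a trivial stand-in for a real model):

```python
import random
import statistics

def simulate(seed, n=10_000):
    """One simulation replicate: estimate E[U(0,1)] = 0.5 with its own RNG."""
    rng = random.Random(seed)  # independent generator per replicate
    return sum(rng.random() for _ in range(n)) / n

# Run the same simulation with many seeds, then do statistics on the collection
# of results instead of trusting any single seed.
results = [simulate(seed) for seed in range(30)]
mean_est = statistics.mean(results)
spread = statistics.stdev(results)
```

With a well-behaved PRNG the replicates should cluster tightly around 0.5; a seed that produced a clear outlier would be exactly the kind of pattern this workflow is meant to surface.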
How to explain random forest ML algorithm doesn't learn at all, while logistic regression learns very well?
RF does very poorly when the data is highly sparse, because there's a high probability that the feature it selects to split on will be all 0s. See: When to avoid Random Forest? Something as simple as SVD or non-negative matrix factorization can improve RF when it recovers a useful dense representation of the sparse data, but this isn't guaranteed. Too rich a tree (too high a max depth and related parameters) can be a source of overfitting, but the effect is usually small, so most people just build the deepest tree and call it a day. Setting the number of features to split on is by far the most important hyperparameter; I can't find the article that I'm thinking of right now, though. Also, "auto" and "sqrt" do the same thing for RandomForestClassifier according to the documentation: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html Using the F1 score to select the model might not be sensitive enough, and could choose a bogus model. Using a strictly proper scoring rule that takes into account the full probability information is best; some examples are the Brier score and cross-entropy. Semi-related note: you don't have to tune the number of trees in a random forest (Do we have to tune the number of trees in a random forest?). Just pick a number large enough that the variance in the predictions is small.
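A sketch of the SVD-then-RF idea above, on synthetic sparse data (all sizes and parameters are arbitrary, and there is no guarantee the densified version wins on any particular real dataset):

```python
import numpy as np
from scipy import sparse
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Highly sparse binary features; the label depends on a small block of them.
X = sparse.random(500, 2000, density=0.01, random_state=0, format="csr")
X.data[:] = 1.0
y = (np.asarray(X[:, :50].sum(axis=1)).ravel() > 0).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0)

# RF directly on the sparse matrix ...
plain = cross_val_score(rf, X, y, cv=3).mean()

# ... versus RF on a dense low-rank representation from TruncatedSVD.
densified = cross_val_score(
    make_pipeline(TruncatedSVD(n_components=50, random_state=0), rf),
    X, y, cv=3).mean()
```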
How to explain random forest ML algorithm doesn't learn at all, while logistic regression learns very well?
You've left the default value for n_iter, 10, in the search. That's far too low for most uses with more than a couple of important hyperparameters, especially if your search space contains large regions of poorly performing hyperparameter combinations. In particular, I think your tree-complexity controls are often too strict: a small depth, a large minimum number of samples per split or leaf, or a small maximum number of leaves will likely cause underfitting. You could raise n_iter to 60-100, shrink the ranges of those parameters to less-strict ones (or don't sample them uniformly), and/or just search over fewer of those similar-purpose hyperparameters. Random forests with very few trees are likely to have unstable scores as well; better not to search over that hyperparameter, and just leave it at something large-ish.
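As a sketch of these suggestions (the dataset and parameter ranges are made up for illustration, not tuned): search only a couple of similar-purpose complexity controls, keep them away from the very strict end, fix the number of trees, and raise n_iter well above the default.

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Made-up data standing in for the question's dataset.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Only two complexity controls, with not-too-strict ranges.
param_distributions = {
    "max_features": randint(2, 15),
    "min_samples_leaf": randint(1, 10),
}
search = RandomizedSearchCV(
    RandomForestClassifier(n_estimators=100, random_state=0),  # not searched
    param_distributions,
    n_iter=60,  # well above the default of 10
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```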
36,815
Sign of Covariance and of Spearman's Rho
There are many counterexamples. But let's address the underlying question: What I am eventually after is a proof that if $h$ is an increasing monotonic transformation, then $\operatorname{Sign}\{\operatorname{Cov}(X,Y)\}=\operatorname{Sign}\{\operatorname{Cov}(X,h(Y))\}$. This is false. The first counterexample is the discrete uniform distribution $F$ on the $(x_i,y_i)$ points $(1,8.1), (2,9.1), (3,10.1), (4,11.1), (5,12.1), (6,13.1), (7,0.1),$ here depicted by plotting those seven points as red circles in the left panel. Consider the family of Box-Cox transformations $$h_p(y) = \frac{y^p - 1}{p\, C} + 1$$ where the constant $C$ is chosen to make the values of $h_p(y_i)$ comparable to those of $y$ (for instance, by setting $C$ to be the $(p-1)$-th power of the geometric mean of the $y_i$) and $1$ is added to make $h_1$ the identity. These are all monotonic; an example is shown for $p=2$ in the right panel. Their effects on the covariance are plotted in the middle panel. It shows a change from negative covariance (due to that outlying point at the bottom left) to positive covariance (because the transformation makes the point just a little less outlying, reducing its negative effect on the otherwise strong positive covariance of all the other data). In particular, to be perfectly explicit, you may compute that $$h_2(y_i) = (7.0, 8.6, 10.4, 12.4, 14.5, 16.8, 0.908),$$ giving $\operatorname{Cov}(x_i,y_i) = -7/3 \lt 0$ and $\operatorname{Cov}(x_i, h_2(y_i))=0.39217 \gt 0.$ The points $(x_i, h_2(y_i))$ are plotted as hollow blue triangles in the left panel. The second counterexample is a continuous version of the first. Let $(U,V)$ have any continuous distribution supported on $[-1,1]\times[-1,1].$ For any real number $\epsilon$ define $$(X_\epsilon, Y_\epsilon) = (X,Y) + \epsilon(U,V).$$ Provided $\epsilon\ne 0,$ $(X_\epsilon, Y_\epsilon)$ has a continuous distribution (see Is the sum of a continuous random variable and mixed random variable continuous?).
Provided $|\epsilon| \lt 1/10,$ the support of $(X_\epsilon, Y_\epsilon)$ is in the first quadrant (strictly positive in both variables), implying the Box-Cox transformations can be applied to $Y_\epsilon.$ You can perform the calculations confirming that the covariance of $(X_\epsilon,Y_\epsilon)$ is a continuous function of $\epsilon.$ Ergo, for sufficiently small $\epsilon,$ the first counterexample shows the covariance of $(X_\epsilon,Y_\epsilon)$ is negative while that of $(X_\epsilon, h_2(Y_\epsilon))$ is positive, QED.
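The discrete counterexample is easy to verify numerically; here is a sketch in Python (np.cov computes the sample covariance, matching the $-7/3$ above):

```python
import numpy as np

# The seven points of the discrete counterexample.
x = np.arange(1.0, 8.0)
y = np.array([8.1, 9.1, 10.1, 11.1, 12.1, 13.1, 0.1])

# Box-Cox-style transform h_p with C set to the (p-1)-th power
# of the geometric mean of y.
def h(y, p):
    C = np.exp(np.mean(np.log(y))) ** (p - 1)
    return (y**p - 1) / (p * C) + 1

cov_before = np.cov(x, y)[0, 1]       # -7/3: negative
cov_after = np.cov(x, h(y, 2))[0, 1]  # about 0.39: positive
print(cov_before, cov_after)
```

The monotone transform flips the sign of the covariance, exactly as claimed.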
36,816
Sign of Covariance and of Spearman's Rho
I say they can have opposite signs. Let's look at the following simulation.

# Set a random seed so that everyone can get the same results
set.seed(1)

# Import the library that simulates correlated bivariate data
library(MASS)

# Simulate bivariate normal data with standard normal marginals and 0.9
# Pearson correlation. To those 99 observations, add a gigantic outlier
# completely out of the mainstream of the other 99 points. This is why
# we end up with negative covariance.
X <- rbind(mvrnorm(99, c(0, 0), matrix(c(1, 0.9, 0.9, 1), 2, 2)),
           c(-10000, 10000))

# Plot the data
plot(X[, 1], X[, 2])

# Calculate the covariance of the sample. When we regard the simulated
# data as a discrete population, this is the population covariance.
cov(X[, 1], X[, 2])  # comes out negative, as the plot suggests

# Calculate the sample Spearman correlation, which is positive,
# since 99% of the data follow an upward trend.
cor(X[, 1], X[, 2], method = 'spearman')  # comes out positive

However, we can take the simulated data as a discrete population.

# Apply the empirical CDF function to perform the probability integral
# transform. If we regard the sampled data as a discrete population, we
# have tricked R into calculating the population Spearman correlation.
cov(ecdf(X[, 1])(X[, 1]), ecdf(X[, 2])(X[, 2]))  # positive, same value as before

The "ecdf" (empirical CDF) tricks R into making the population CDF of this discrete variable, so I think we're working at the population level and that this is a counterexample.
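The same construction can be sketched in Python (numpy/scipy); this is not a transcript of the R run above, so the exact numbers differ with its own seed, but the signs are the point:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# 99 strongly positively correlated points plus one gigantic outlier.
sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], sigma, size=99)
X = np.vstack([X, [-10000.0, 10000.0]])

c = np.cov(X[:, 0], X[:, 1])[0, 1]    # dominated by the outlier: negative
rho, _ = spearmanr(X[:, 0], X[:, 1])  # rank-based: positive
print(c, rho)
```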
36,817
Sign of Covariance and of Spearman's Rho
To enhance the value of this thread I will lay out why Quadrant Dependence implies that a) Covariance will have the same sign as Spearman's Rho if both are not zero b) The sign of covariance is not affected by strictly increasing monotonic transformations, if it remains non-zero. I will show it for continuous distributions with densities, but this is not a critical condition. Let $X$, $Y$ be two random variables with joint distribution function $F_{XY}(x,y)$, marginal distribution functions $F_X(x), F_Y(y)$ and marginal density/probability mass functions $f_X(x), f_Y(y)$. Then we have \begin{cases} \text{Positive Quadrant Dependence iff} \;\;\; F_{XY}(x,y) - F_X(x)F_Y(y) \geq 0\;\;\; \forall (x,y)\\ \text{Negative Quadrant Dependence iff}\;\;\ F_{XY}(x,y) - F_X(x)F_Y(y) \leq 0\;\;\; \forall (x,y) \end{cases} Note that the crucial condition is the "for all $(x,y)$" qualifier. Now the "beautiful covariance formula of Hoeffding" is $$\text{Cov}(X,Y) = \int\int_{S_{XY}}[F_{XY}(x,y) - F_X(x)F_Y(y)] dx dy$$ where $S_{XY}$ is the joint support. On the other hand, Spearman's Rho can be expressed as $$\rho_S(X,Y) = 12\cdot \int\int_{S_{XY}}f_x(x)f_y(y)[F_{XY}(x,y) - F_X(x)F_Y(y)] dx dy$$ Those that remember that $dF(x) = f(x)dx$ understand why the existence of densities is not critical. But it is clarifying: compacting $[F_{XY}(x,y) - F_X(x)F_Y(y)] \equiv QD(x,y)$ we have $$\text{Cov}(X,Y) = \int\int_{S_{XY}}QD(x,y) dx dy$$ $$\rho_S(X,Y) = 12\cdot \int\int_{S_{XY}}f_x(x)f_y(y)QD(x,y) dx dy$$ We see that the covariance "sums" the quantities $QD(x,y)$ over the joint support "unweighted", while Spearman's Rho sums them weighted by the product of the densities, $f_x(x)f_y(y)$ (which is always non-negative). If Quadrant Dependence holds, then in both measures we "sum" either non-negative things only or non-positive things only. 
So a) Under $QD$, Covariance will have the same sign as Spearman's Rho if both are not zero: $$\text{sign}\left\{\text{Cov}(X,Y)\right\} = \text{sign}\left\{\rho_S(X,Y)\right\}$$ Moreover, consider a strictly increasing monotonic transformation of $Y$, $h(Y)$. Spearman's Rho is invariant under such a transformation, so $$\rho_S(X,Y) = \rho_S(X,h(Y))$$ Under Quadrant Dependence, we will have, again when both measures are not zero, $$\text{sign}\left\{\text{Cov}(X,h(Y))\right\} = \text{sign}\left\{\rho_S(X,h(Y))\right\}$$ Linking sign equalities we then obtain $$\text{sign}\left\{\text{Cov}(X,Y)\right\} = \text{sign}\left\{\text{Cov}(X,h(Y))\right\}$$ As implied in the other answers, the counterintuitive result here is that Quadrant Dependence cannot be dropped: if it does not hold, then we have no guarantee that a strictly increasing transformation of one variable will preserve the sign of covariance. Therefore, "pretty logical" informal arguments like "since, when $Y$ tends to increase so does $h(Y)$, it follows that if $X$ covaries positively with $Y$, it will covary positively also with $h(Y)$" are wrong when by using the verb "co-vary" we have in mind the Covariance measure. "It follows" for Covariance only if $QD$ holds. Formally, one can see this by setting $Z = h(Y)$, $h'(y) > 0$, and observing that $$F_Z(z) = F_Y(h^{-1}(z)),\;\;\;F_{XZ}(x,z) = F_{XY}(x,h^{-1}(z)),\;\;\; dz = h'(y)\,dy.$$ Then we have $$\text{Cov}(X,Z) = \int\int_{S_{XZ}}[F_{XZ}(x,z) - F_X(x)F_Z(z)] dx dz$$ $$= \int\int_{S_{XZ}}[F_{XY}(x,h^{-1}(z)) - F_X(x)F_Y(h^{-1}(z))] dx dz$$ and then make a change of variable from $Z$ to $Y$, to get $$\text{Cov}(X,Z) = \int\int_{S_{X,Y}}h'(y)\cdot QD(x,y)\,dx\, dy$$ If $QD$ does not hold, it means that some $QD(x,y)$ will be positive and some negative.
Then the fact that, say, $\text{Cov}(X,Y) > 0$ alone cannot guarantee that $\text{Cov}(X,Z) > 0$ as well: here we weight the previous integrand by $h'(y)$, which, although strictly positive, is not constant, so it may weight the negative $QD(x,y)$ disproportionately more than the positive ones, resulting overall in a negative value. So, from this path at least, the property of Quadrant Dependence is essential.
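A quick Monte Carlo sketch of the positive case: a bivariate normal with positive correlation is positively quadrant dependent, so covariance and Spearman's Rho should share their sign, and a strictly increasing transform of $Y$ should preserve both (the seed, sample size, and choice of $h(y)=e^y$ are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Positively quadrant dependent pair: bivariate normal with rho = 0.7.
sigma = np.array([[1.0, 0.7], [0.7, 1.0]])
x, y = rng.multivariate_normal([0.0, 0.0], sigma, size=20_000).T
z = np.exp(y)  # a strictly increasing h

cov_xy = np.cov(x, y)[0, 1]
cov_xz = np.cov(x, z)[0, 1]
rho_xy, _ = spearmanr(x, y)
rho_xz, _ = spearmanr(x, z)  # identical ranks, so identical rho
print(cov_xy, cov_xz, rho_xy, rho_xz)
```

Both covariances come out positive and the two Spearman values coincide, as the sign-linking argument predicts under $QD$.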
36,818
How can I compare models without fitting?
In this situation you are essentially comparing the distributions of the $\epsilon_i$ among the 3 models. So you need to examine issues like: Are the mean values of the $\epsilon_i$ different among the 3 models, and is any of these mean values different from 0? (That is, is there a bias in any of the models and do the 3 models differ in bias?) Is there any systematic relation of the $\epsilon_i$ to the values predicted from the corresponding model, or to the values of the independent variables $x_{1,i}, x_{2,i}, x_{3,i}$? You should consider all three independent variables here even if the particular model only used 1 or 2 of them. Are there significant differences in the variances of the $\epsilon_i$ among the 3 models? The details of how best to approach these questions will depend on the nature of your data. For example, if values of $y_i$ are necessarily positive and have typical measurement errors proportional to their values (as often is the case in practice), it might make sense to do this analysis on differences between log-transformed $y_i$ and log-transformed predictions from each of your models. Visual analysis of the distributions of the $\epsilon_i$ among the 3 models, for example with density plots, would be an important first step. Depending on the nature of the data, standard parametric or non-parametric statistical tests for differences in mean values, applied to the $\epsilon_i$ for the 3 models, would address Issue 1. Issue 2 is essentially what is done to examine the quality of any fitted model; in your case this analysis might show domains of the independent variables over which one or more of your pre-specified models does not work well. Plots of $\epsilon_i$ versus predicted values and independent-variable values, with loess curves to highlight trends, for each of your models would be useful.
If there is no bias in any model and the analysis of Issue 2 shows no problems, then the remaining Issue 3 is whether any of the models is superior in terms of precision/variance. In the ideal case of normally distributed $\epsilon_i$ within each model, F-tests could test for equality of variances.
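A minimal sketch of Issues 1 and 3 in Python, using simulated stand-ins for the three residual vectors (in practice these come from your own data); Levene's test is used here as a robust substitute for the normal-theory F-test mentioned above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated stand-ins for residuals eps_i = y_i - f(x_i) from three
# fixed (not fitted) models.
eps_f = rng.normal(0.0, 1.0, 200)   # unbiased, precise
eps_g = rng.normal(0.5, 1.5, 200)   # biased, less precise
eps_h = rng.normal(0.0, 1.5, 200)   # unbiased, less precise

# Issue 1: is any model biased (mean residual different from 0)?
for name, eps in [("f", eps_f), ("g", eps_g), ("h", eps_h)]:
    t, p = stats.ttest_1samp(eps, 0.0)
    print(name, round(p, 4))

# Issue 3: do the residual variances differ among the models?
stat, p_var = stats.levene(eps_f, eps_g, eps_h)
print(round(p_var, 6))
```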
36,819
How can I compare models without fitting?
A probabilistic comparison of the models, e.g. involving some likelihood computed from the $\epsilon$ with some data (and, derived from this, AIC or a ratio test), makes little sense. This is because: (1) you already know for certain that the model is going to be wrong, so the residuals that you end up with have no relation to the hypothesised distribution of errors that you use to test different hypotheses (you do not have a statistical/probabilistic model); and (2) your goal is not to test a hypothesis (basic/pure science), but to characterize the prediction performance of a simplified model (applied science). Most often people describe models in terms of the percent of error for predictions. Examples:

Sludge pipe flow pressure drop prediction using composite power-law friction factor-Reynolds number correlations based on different non-Newtonian Reynolds numbers: "It is shown that these correlations can be used to predict pressure drop to within ±20% for a given sludge concentration and operating condition."

Predicting the effective viscosity of nanofluids based on the rheology of suspensions of solid particles: "The present model suits with the 501 viscosity values with mean deviations lower than 5% and 75% of them are within the correlation coefficient 0.78–1."

Application of artificial intelligence to modelling asphalt-rubber viscosity: "Figure 2 presents a comparison between measured viscosity ($\rho$) and the viscosity calculated by the Einstein model. A difference between calculated and measured values confirms that there is an elevated physical interaction between asphalt base and rubber particles."

Bond contribution method for estimating Henry's law constants: "A correlation coefficient (r2) of 0.94 was determined for the relationship between known LWAPCs (log water-to-air partition coefficients) and bond estimated LWAPCs for the 345 compound data set."
Basically you can google any model that is a simplification of reality and you will find people describing their discrepancy with reality in terms of correlation coefficients or percent of variation. I want to test the hypothesis that "phenomenon A" involving x_3,i contributes measurably to the production of y. Model f incorporates phenomenon A while g and h do not, so if my hypothesis were true, I would predict that model f performs significantly better than either g or h. For such a comparison you could consider the measured performance as a sample, taken out of a larger (hypothetical) population of performances. So you sort of wish to describe the parameters of the population distribution of the errors $\epsilon$ and compare those. This you might consider as probabilistic. For instance, you could phrase it as 'the average error of the model is $y \pm x$'. Your hypothesis is about those parameters that describe the distribution of the errors. However this view is a bit problematic, since often the "sample" that is used to measure performance is not really a random selection (e.g. they are measurements along a predefined range or among a selected practical set of items). Then any quantification of the error in the estimate of general performance should not be based on a model for random selection (e.g. using variance in the sample to describe the error of the estimate). So it still makes little sense to use a probabilistic model to describe the comparisons. It might be sufficient to just state descriptive data, and make your "estimate" about generalization based on logical arguments.
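A sketch of such descriptive discrepancy figures, of the kind quoted in the papers above (the measured/predicted values are invented for illustration):

```python
import numpy as np

# Hypothetical measured values and predictions from a simplified model.
measured = np.array([10.2, 12.5, 15.1, 20.3, 25.8])
predicted = np.array([9.8, 13.0, 14.5, 21.5, 24.9])

# Percent error per prediction and an overall correlation coefficient,
# the usual descriptive summaries of a model's discrepancy with reality.
pct_error = 100 * np.abs(predicted - measured) / measured
r = np.corrcoef(measured, predicted)[0, 1]
print(pct_error.max(), r**2)
```

A statement like "the model predicts to within X% over the tested range" then follows directly from pct_error.max(), without any probabilistic machinery.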
How can I compare models without fitting?
A probabilistic comparison of the models, e.g. involving some likelihood computed from the $\epsilon$ with some data (and derived from this AIC or ratio test), makes little sense. This is because Y
How can I compare models without fitting? A probabilistic comparison of the models, e.g. involving some likelihood computed from the $\epsilon$ with some data (and derived from this AIC or ratio test), makes little sense. This is because You already know for certain that the model is gonna be wrong. The residuals that you end up with have no relation with the hypothesised distribution of errors that you use to test different hypotheses. (you do not have a statistical/probabilisitc model) Your goal is not to test a hypothesis (basic/pure science), but to characterize the prediction performance of a simplified model (applied science). Most often people describe models in terms of the percent of error for predictions. Examples: Sludge pipe flow pressure drop prediction using composite power-law friction factor-Reynolds number correlations based on different non-Newtonian Reynolds numbers It is shown that these correlations can be used to predict pressure drop to within ±20% for a given sludge concentration and operating condition. Predicting the effective viscosity of nanofluids based on the rheology of suspensions of solid particles The present model suits with the 501 viscosity values with mean deviations lower than 5% and 75% of them are within the correlation coefficient 0.78–1. Application of artificial intelligence to modelling asphalt –rubber viscosity Figure 2 presents a comparison between measured viscosity ($\rho$) and the viscosity calculated by Einstein model. A difference between calculated and measured values confirms that there is an elevated physical interaction between asphalt base and rubber particles. Bond contribution method for estimating henry's law constants A correlation coefficient (r2) of 0.94 was determined for the relationship between known LWAPCs (log water‐to‐air partition coefficients) and bond estimated LWAPCs for the 345 compound data set. 
Basically you can google any model that is a simplification of reality and you will find people describing their discrepancy with reality in terms of correlation coefficients, or percent of variation. I want to test the hypothesis that "phenomenon A" involving x_3,i contributes measurably to the production of y. Model f incorporates phenomenon A while g and h do not, so if my hypothesis were true, I would predict that model f performs significantly better than either g or h. For such comparison you could consider the measured performance as a sample, a sample taken out of a larger (hypothetical) population of performance. So you sort of wish to describe the parameters of the population distribution of the errors $\epsilon$ and compare those. This you might consider as probabilistic. For instance, you could phrase it as 'the average error of the model is $y \pm x$'. Your hypothesis is about those parameters that describe the distribution of the errors. However this view is a bit problematic, since often the "sample" that is used to measure performance, is not really a random selection (e.g. it are measurements along a predifined range or among a selected practical set of items). Then any quantification of the error in the estimate of general peformance should not be based on a model for random selection (e.g. using variance in the sample to describe te error of the estimate). So it still makes little sense to use a probabilistic model to describe the comparisons. It might be sufficient to just state descriptive data, and make your "estimate" about generalization based on logical arguments.
Using regression weights when $Y$ might be measured with nonzero-mean measurement error
I would use the 'probability of biased' as a dummy variable in the regression; it can possibly 'sop up' the bias present in the biased case. Using your example (but calling set.seed(1234) before the call to get_df), I tried

summary(lm(y_observed ~ x1 + x2 + I(1-pr_unbiased), data=df_unrated))

and got:

Call:
lm(formula = y_observed ~ x1 + x2 + I(1 - pr_unbiased), data = df_unrated)

Residuals:
   Min     1Q Median     3Q    Max 
-9.771 -2.722 -0.386  2.474 11.238 

Coefficients:
                   Estimate Std. Error t value Pr(>|t|)    
(Intercept)           5.515      0.250   22.07   <2e-16 ***
x1                    1.108      0.169    6.54    1e-10 ***
x2                    4.917      0.168   29.26   <2e-16 ***
I(1 - pr_unbiased)   -3.727      0.383   -9.72   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.25 on 996 degrees of freedom
Multiple R-squared:  0.514, Adjusted R-squared:  0.513 
F-statistic:  351 on 3 and 996 DF,  p-value: <2e-16

The coefficient for the term 1-pr_unbiased should be the size of the bias.
Using regression weights when $Y$ might be measured with nonzero-mean measurement error
This is an omitted-variable problem where you have an indicator variable $Z$ that is unobserved, but which has a relationship with the response variable. Since "bias" is a property of an estimator, not a regression variable, I am going to reframe your question as one where you want to find the true regression function conditional on $Z=0$ using regression data that does not include this variable, and a separate set of regression training data that is used to estimate the probabilities $p_0(x,y) \equiv \mathbb{P}(Z=0|X=x,Y=y)$. Let $p_{Y|X}$ denote the conditional density of the response variable in the regression problem with response variable $Y$ and explanatory variable $X$ (but excluding $Z$). From the rules of conditional probability, the target distribution of interest can be written as: $$\begin{equation} \begin{aligned} p(Y=y|X=x,Z=0) &= \frac{p(Y=y,Z=0|X=x)}{p(Z=0|X=x)} \\[6pt] &= \frac{p_0(x,y) \cdot p_{Y|X}(y|x)}{\int_\mathbb{R} p_0(x,y) \cdot p_{Y|X}(y|x) \ dy} \\[6pt] &\overset{y}{\propto} p_0(x,y) \cdot p_{Y|X}(y|x). \\[6pt] \end{aligned} \end{equation}$$ Thus, we can see that it is sufficient to be able to estimate the regression function $p_{Y|X}$ in the regression model with $Z$ omitted, and also estimate the probability function $p_0$ which you have as a separate estimator from your training data. The former can be estimated using OLS estimation without imposing any weights. The "weighting" occurs after estimation of this function, by substitution into the above equation. We can see that it is not necessary (or desirable) to use any weights in the regression of $Y$ on $X$, since it is sufficient to estimate the conditional density $p_{Y|X}$ without consideration of $Z$. OLS estimation of the coefficients of this regression gives an estimator $\hat{p}_{Y|X}$, and since you also have a separate estimator $\hat{p}_0$ you then have: $$\hat{p}(Y=y|X=x,Z=0) \propto \hat{p}_0(x,y) \cdot \hat{p}_{Y|X}(y|x). 
$$ Once you have substituted these estimators, all that remains is to try to determine the scaling constant that yields a proper density function. This can be done by a range of numerical integration methods (e.g., Simpson's rule, quadrature, Metropolis-Hastings, etc.).
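A numerical sketch of this two-step procedure: estimate $\hat p_{Y|X}$, multiply by $\hat p_0$, then normalise by numerical integration (a trapezoid rule stands in for Simpson's rule here). Both component functions below are illustrative stand-ins, not estimates fitted to any data set:

```python
# Sketch of the post-estimation "weighting": multiply an estimated
# conditional density p(y|x) by an estimated P(Z=0|x,y), then rescale
# numerically so it integrates to 1 over y. The two models are made up.
from math import exp
from statistics import NormalDist

def p_y_given_x(y, x):
    # Stand-in for an OLS-estimated regression model: y | x ~ N(1 + 2x, 1).
    return NormalDist(mu=1.0 + 2.0 * x, sigma=1.0).pdf(y)

def p0(x, y):
    # Stand-in logistic model for P(Z = 0 | X = x, Y = y).
    return 1.0 / (1.0 + exp(-(0.5 * y - x)))

def normalising_const(x, lo=-20.0, hi=20.0, n=4000):
    # Trapezoid-rule approximation of the integral of p0 * p_{Y|X} over y.
    h = (hi - lo) / n
    vals = [p0(x, lo + i * h) * p_y_given_x(lo + i * h, x) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def p_y_given_x_z0(y, x):
    # The target density p(y | x, Z = 0), properly normalised over y.
    return p0(x, y) * p_y_given_x(y, x) / normalising_const(x)
```

The normalising constant depends on $x$ only, so for a fixed query point it needs to be computed once.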
Using regression weights when $Y$ might be measured with nonzero-mean measurement error
Your idea will not give an unbiased estimate, unless you can always be 100% sure whether an example is biased or not. As soon as one biased example can be part of your training set with nonzero probability, there will be bias, as you have nothing to cancel out that bias. In practice, your bias will simply be multiplied by a factor $\alpha<1$, where $\alpha$ is the probability that a biased example goes undetected. Assuming you have enough data, a better approach is to compute $P(Z=biased|X,Y)$ for each sample, and then remove all samples from the training set for which this probability exceeds a certain threshold. For example, if it is feasible for you to train your model on only the samples for which $P(Z=biased|X,Y)<0.01$, and your dataset thereby decreases from $N$ biased and $M$ unbiased examples to $n$ biased and $m$ unbiased examples, then the bias will be multiplied by a factor $f=\frac{n(N+M)}{N(n+m)}$. Since typically $\frac nN$ will be far lower than $\frac mM$, $f$ will be much smaller than $1$, resulting in a significant improvement. Note that both techniques can be combined: rows with $p=P(Z=biased|X,Y)>\beta$ go out (for some choice of $\beta$; above I used $\beta=0.01$), and the rows that stay in get a weight $(1-\frac{p}{\beta})^2$. That should give you the best of both worlds.
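The combined rule is a one-liner per row; a Python sketch with illustrative probabilities (the values in `p_biased` are made up):

```python
# Combined rule: drop rows with P(Z=biased|X,Y) >= beta, and down-weight
# the surviving rows by (1 - p/beta)^2. Probabilities are illustrative.
beta = 0.01
p_biased = [0.000, 0.002, 0.005, 0.009, 0.050, 0.300]

kept    = [p for p in p_biased if p < beta]      # hard threshold
weights = [(1.0 - p / beta) ** 2 for p in kept]  # soft down-weighting
```

Rows near the threshold get weights near zero, so the transition between "in" and "out" is smooth rather than abrupt.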
Convergence of the Matérn covariance function to the squared exponential
The Matérn function can be written in terms of $$f_{\nu}(x) = C_\nu |x|^{\nu} K_{\nu}\left(|x|\right)\tag{*}$$ where $C_\nu$ is a normalizing constant (to make the value of $f_\nu(0)$ equal to $1$) and $x = \sqrt{2\nu}\, d/\rho.$ (This agrees with the Wikipedia notation, where $x$ represents $\sqrt{2\nu}\, d/\rho.$) As shown at Moment generating function of the inner product of two gaussian random vectors (using elementary techniques), the Matérn function is proportional to the density function for the distribution of the dot product of two random vectors where each has $2\nu+1$ components and all components are independently distributed as standard Normal variables. Such an inner product is the sum of the $2\nu+1$ independent and identically distributed products of corresponding components of the vectors. Each of those is the product of two independent standard Normal variables $X$ and $Y$ and therefore has mean $0$ and variance $$\operatorname{Var}(XY) = E[(XY)^2] = E[X^2]E[Y^2] = (1)(1) = 1.$$ Consequently the inner product has mean $(2\nu+1)(0) = 0$ and variance $(2\nu+1)(1)=2\nu+1.$ The Central Limit Theorem asserts that the normalized versions of these inner products therefore approach a standard Normal distribution. The effect of normalization is to replace $x$ by the square root of its variance, $x\sqrt{2\nu+1},$ which changes the probability element $f_{\nu}(x)\mathrm{d}x$ into $$f_{\nu}(x\sqrt{2\nu+1})\mathrm{d}(x\sqrt{2\nu+1}) = \sqrt{2\nu+1}\, f_{\nu}(x\sqrt{2\nu+1})\mathrm{d}x.$$ This differs from $(*)$ (where we may take $\rho=1$ without any loss of generality, because it merely establishes the distance unit of measurement) only insofar as $x$ is multiplied by $\sqrt{2\nu+1}$ instead of $\sqrt{2\nu}.$ Since the ratio of these terms approaches unity, in the limit it makes no difference which one is used; consequently, $f_\nu$ converges pointwise to this (suitably scaled) Normal density.
One tiny nicety is that because $f_\nu$ is normalized to have a peak height of $1,$ which is $\sqrt{2\pi}$ times the peak height of the standard Normal density, the convergence is to $\sqrt{2\pi}$ times the standard Normal density rather than the density itself. Re-introducing the scale factor $\rho$, we have deduced--using purely statistical thinking!--that $$\lim_{\nu\to\infty} f_\nu(d) = \exp\left(-\frac{d^2}{2\rho^2}\right)$$ for every $d.$ This agrees with what Wikipedia asserts. This plot shows graphs of $f_2$ (blue), $f_5$ (red), and the limiting Gaussian (gold). The convergence occurs by pulling the tail in to fill out the peak.
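The inner-product construction is easy to check by simulation. A stdlib-only Monte Carlo sketch (here $\nu = 25$, so $d = 2\nu+1 = 51$ components; sample sizes kept small for speed): after dividing the inner product by $\sqrt{d}$, its mean should be near $0$ and its variance near $1$, consistent with the CLT argument above.

```python
# Monte Carlo check: the dot product of two independent standard-normal
# vectors with d = 2*nu + 1 components has mean 0 and variance d, so its
# version normalised by sqrt(d) should look approximately standard normal.
import random

random.seed(42)

def normalised_inner_product(d):
    s = sum(random.gauss(0.0, 1.0) * random.gauss(0.0, 1.0) for _ in range(d))
    return s / d ** 0.5

d = 2 * 25 + 1                       # nu = 25
sample = [normalised_inner_product(d) for _ in range(20000)]
mean = sum(sample) / len(sample)
var = sum((v - mean) ** 2 for v in sample) / len(sample)
```

A fuller check would compare the whole empirical distribution (not just two moments) against the standard normal, e.g. with a Q-Q plot.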
$\min(x)$ as a quantile estimator for the 1% quantile of $x$
The minimum of a sample of 100 observations is used as an estimator of the 1% quantile in practice. I've seen it called an "empirical percentile."

Known distribution family

If you want a different estimate AND have an idea about the distribution of the data, then I suggest looking at order statistic medians. For instance, this R package uses them for probability plot correlation coefficients (PPCC). You can find how they do it for some distributions such as the normal. You can see more details on order statistic medians for the normal and lognormal distributions in Vogel's 1986 paper "The Probability Plot Correlation Coefficient Test for the Normal, Lognormal, and Gumbel Distributional Hypotheses". For instance, Eq. 2 of Vogel's paper gives the order statistic median corresponding to the minimum of a 100-observation sample from the standard normal distribution as follows: $$M_1=\Phi^{-1}(F_Y(\min(y)))$$ where the median of the CDF value at the minimum is estimated by: $$\hat F_Y(\min(y))=1-(1/2)^{1/100}=0.0069$$ We get the value $M_1=-2.46$ for the standard normal, to which you can apply the location and scale to get your estimate of the 1st percentile: $\hat\mu-2.46\hat\sigma$. Here is how this compares to min(x) on the normal distribution: the plot on the top is the distribution of the min(x) estimator of the 1st percentile, and the one on the bottom is the one I suggested to look at. I also pasted the code below. In the code I randomly pick the mean and dispersion of the normal distribution, then generate a sample of 100 observations. Next, I find min(x), then scale it to standard normal using the true parameters of the normal distribution. For the M1 method, I calculate the quantile using the estimated mean and variance, then scale it back to standard using the true parameters again. This way I can account for the impact of the estimation error of the mean and standard deviation to some extent. I also show the true percentile with a vertical line. You can see how the M1 estimator is much tighter than min(x). It is because we use our knowledge of the true distribution type, i.e. normal.
We still don't know the true parameters, but even knowing the distribution family improved our estimate tremendously.

OCTAVE CODE

You can run it here online: https://octave-online.net/

N = 100000
n = 100
mus = randn(1,N);
sigmas = abs(randn(1,N));
r = randn(n,N).*repmat(sigmas,n,1) + repmat(mus,n,1);
muhats = mean(r);
sigmahats = std(r);
fhat = 1 - (1/2)^(1/100)
M1 = norminv(fhat)
onepcthats = (M1*sigmahats + muhats - mus) ./ sigmas;
mins = min(r);
minonepcthats = (mins - mus) ./ sigmas;
onepct = norminv(0.01)
figure
subplot(2,1,1)
hist(minonepcthats,100)
title 'min(x)'
xlims = xlim;
ylims = ylim;
hold on
plot([onepct,onepct],ylims)
subplot(2,1,2)
hist(onepcthats,100)
title 'M1'
xlim(xlims)
hold on
plot([onepct,onepct],ylims)

Unknown distribution

If you don't know from which distribution the data is coming, then there's another approach that is used in financial risk applications. There are two Johnson distributions, SU and SL. The former is for unbounded cases such as the Normal and Student t, and the latter is for lower-bounded cases such as the lognormal. You can fit a Johnson distribution to your data, then estimate the required quantile using the estimated parameters. Tuenter (2001) suggested a moment-matching fitting procedure, which is used in practice by some. Will it be better than min(x)? I don't know for sure, but sometimes it produces better results in my practice, e.g. when you don't know the distribution but know that it's lower bounded.
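The fhat and M1 computation above translates directly to stdlib Python, with statistics.NormalDist.inv_cdf playing the role of Octave's norminv:

```python
# Order-statistic median for the minimum of a sample of n = 100 from the
# standard normal, as in Vogel's Eq. 2.
from statistics import NormalDist

n = 100
fhat = 1 - 0.5 ** (1 / n)            # median of F_Y(min(y)), ~ 0.0069
M1 = NormalDist().inv_cdf(fhat)      # standard-normal quantile, ~ -2.46

def q01_estimate(muhat, sigmahat):
    # 1st-percentile estimate for a fitted normal: muhat + M1 * sigmahat.
    return muhat + M1 * sigmahat
```

With fitted location and scale plugged in, `q01_estimate` reproduces the $\hat\mu - 2.46\hat\sigma$ estimate from the text.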
What does decay_steps mean in Tensorflow tf.train.exponential_decay?
As mentioned in the code of the function, the relation of decay_steps to the decayed learning rate is the following:

decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)

In other words, decay_steps is the number of steps over which the learning rate is multiplied by one factor of decay_rate. Hence, you should set decay_steps in proportion to the total number of steps (global_step) your algorithm will run.
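A plain-Python re-implementation of the formula (including the staircase option that the TF1 API documents; the function name here just mirrors TensorFlow's):

```python
# Exponential learning-rate decay: after exactly decay_steps steps the
# learning rate has been multiplied by decay_rate once. With staircase=True
# the exponent uses integer division, so the rate decays in discrete jumps.
def exponential_decay(learning_rate, global_step, decay_steps, decay_rate,
                      staircase=False):
    if staircase:
        exponent = global_step // decay_steps
    else:
        exponent = global_step / decay_steps
    return learning_rate * decay_rate ** exponent
```

For example, with learning_rate=0.1, decay_steps=1000 and decay_rate=0.96, step 1000 yields 0.1 * 0.96 = 0.096 in both modes.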
Calculate the variance of $\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n S(X_i - X_j)$ for $X_1,\ldots,X_n$ i.i.d. random variables
For the covariance term $\operatorname{Cov}\{S(X_i-X_j),S(X_l-X_w)\}, j>i, w>l$ the only zero cases occur when both $l$ and $w$ are different from $i$ and $j$. Tedious decomposition of the summations gives \begin{align} 4^{-1}n^4 \operatorname{Var}(U) = {} & \sum_{i=1}^{n-1} \sum_{j>i}^n \operatorname{Var} \{S(X_i-X_j)\} \\ & {} + \sum_{i=1}^{n-2} \sum_{j>i}^n \sum_{w>i,w\neq j}^n \operatorname{Cov}\{S(X_i-X_j),S(X_i-X_w)\} \\ & {} +\sum_{i=1}^{n-1} \sum_{j>i}^n \sum_{l\neq i,j>l}^{n-1} \operatorname{Cov} \{S(X_i-X_j),S(X_l-X_j)\}\\ & {} +\sum_{i=1}^{n-1} \sum_{j>i}^{n} \sum_{w>j}^n \operatorname{Cov} \{S(X_i-X_j),S(X_j-X_w)\}\\ & {} +\sum_{i=1}^{n-1} \sum_{j>i}^{n} \sum_{i>l}^{n-1} \operatorname{Cov} \{S(X_i-X_j),S(X_l-X_i)\}\\ & {} +\sum_{i=1}^{n-1} \sum_{j>i}^{n} \sum_{l \neq i,l \neq j}^{n-1} \sum_{w>l,w \neq j, w \neq i}^n \operatorname{Cov} \{S(X_i-X_j),S(X_l-X_w)\} \end{align} The assumption that $X_1, \ldots,X_n$ are a set of i.i.d random variables ensures that all the covariances above are the same, with the exception of the last term which is zero. Then, with a little combinatorics, we obtain \begin{align} 4^{-1}n^4 \operatorname{Var}(U) = {} & [(n-1)+(n-2)+\ldots+1] \operatorname{Var} \{S(X_1-X_2)\}\\ & {} +2[(n-1)(n-2)+(n-2)(n-3)+\ldots+2]\operatorname{Cov}\{S(X_1-X_2),S(X_2-X_3)\}\\ & {} +\frac{1}{2}[(n-1)(n-2)+(n-2)(n-3)+\ldots+2] \operatorname{Cov}\{S(X_1-X_2),S(X_2-X_3)\}\\ & {} +[(n-2)1+(n-3)2+\ldots+2(n-3)+1(n-2)] \operatorname{Cov}\{S(X_1-X_2),S(X_2-X_3)\}\\ \end{align} Keeping in mind that $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$ and $\sum_{k=1}^{n} k(k+1) = \frac{n(n+1)(n+2)}{3}$, we easily see that $\sum_{k=1}^{n-2} k(n-(k+1)) = \frac{n(n-1)(n-2)}{2}-\frac{n(n-1)(n-2)}{3}$. Using these results, it is straightforward to obtain \begin{align} 4^{-1}n^4 \operatorname{Var}(U) ={} & \frac{n(n-1)}{2}\operatorname{Var} \{S(X_1-X_2)\}\\ & {} +\frac{n(n-1)(n-2)}{3} \operatorname{Cov}\{S(X_1-X_2),S(X_2-X_3)\} \end{align} which is the required result.
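The two summation identities invoked here can be sanity-checked by brute force (integer arithmetic, so the comparisons are exact):

```python
# Brute-force verification of the summation identities used in the
# derivation, for a range of n (a sanity check, not a proof).
def sum_k_identity(n):
    # sum_{k=1}^{n} k(k+1) = n(n+1)(n+2)/3
    return sum(k * (k + 1) for k in range(1, n + 1)) == n * (n + 1) * (n + 2) // 3

def cross_term_identity(n):
    # sum_{k=1}^{n-2} k(n-(k+1)) = n(n-1)(n-2)/2 - n(n-1)(n-2)/3
    lhs = sum(k * (n - (k + 1)) for k in range(1, n - 1))
    return lhs == n * (n - 1) * (n - 2) // 2 - n * (n - 1) * (n - 2) // 3
```

Note that the right-hand side of the second identity simplifies to $n(n-1)(n-2)/6$, which is how the $n(n-1)(n-2)/3$ covariance coefficient emerges after the constant factors are collected.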
Calculate the variance of $\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n S(X_i - X_j)$ for $X_1,\ldots,X_n$ i.i.d. random variables
You have the right idea writing the variance as a sum of many covariances. Using iid-ness, you just need to separate that into two or three different kinds of sums that all have equal summands. Once you know all the summands are equal, you just need to count the number of times they show up. If you look at the answer you want to end up with, it suggests there are two (or three) ways that you will get nonzero covariances. First, $$ \text{Cov}\left( S(X_i - X_j), S(X_l - X_m)\right) \neq 0 $$ when $i = l$ *and* $j = m$. This you took into account already, so nice job. However, there are also the situations when $i = l$ *and* $j \neq m$, or $i \neq l$ *and* $j = m$. If either of these is true, then the two arguments of your $\text{Cov}(\cdot, \cdot)$ operator share exactly one $X$ random variable, so the covariance need not vanish.
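A brute-force count of these overlap cases for small $n$; combinatorially, each of the $\binom{n}{2}$ pairs shares exactly one index with $2(n-2)$ other pairs, giving $n(n-1)(n-2)$ ordered combinations in total:

```python
# Count ordered combinations of pairs (i<j) and (l<m) whose index sets
# share exactly one element -- these are the extra nonzero-covariance terms.
from itertools import combinations

def count_sharing_one(n):
    pairs = list(combinations(range(1, n + 1), 2))
    return sum(1 for a in pairs for b in pairs if len(set(a) & set(b)) == 1)
```

For example, `count_sharing_one(5)` returns 60 = 5 * 4 * 3, confirming the closed-form count.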
Calculate the variance of $\sum\limits_{i=1}^{n-1} \sum\limits_{j=i+1}^n S(X_i - X_j)$ for $X_1,\ldots,X_n$ i.i.d. random variables
You wrote: "since $\operatorname{Cov}\{S(X_i-X_j),S(X_\ell-X_w)\}=0,\forall i\neq \ell,j \neq w$." That's wrong. In some terms you have $i\ne\ell$ and $j\ne w$ but $\ell=j.$ What happens then? Suppose $n=5$ and $(i,j)=(1,2)$ and we forbid $1=\ell$ and $2=w.$ Then cases of nonzero covariance include: \begin{align} (\ell,w) = {} & \phantom{\text{or }} (2,3) \\ & \text{or } (2,4) \\ & \text{or } (2,5). \end{align} Now suppose $n=5$ and $(i,j)=(4,5).$ Then cases of nonzero covariance include \begin{align} (\ell,w) = {} & \phantom{\text{or }} (1,4) \\ & \text{or } (2,4) \\ & \text{or } (3,4). \end{align} and so on.
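These cases can be enumerated mechanically. A minimal sketch (the function name is mine, for illustration): the covariance survives exactly when the index sets $\{i,j\}$ and $\{\ell,w\}$ share one element.

```python
from itertools import combinations

def nonzero_partners(n, i, j):
    # All (l, w) with l < w sharing exactly one index with (i, j);
    # these are precisely the pairs whose covariance with S(X_i - X_j) is nonzero.
    return [(l, w) for (l, w) in combinations(range(1, n + 1), 2)
            if (l, w) != (i, j) and len({l, w} & {i, j}) == 1]
```

For $n=5$ and $(i,j)=(1,2)$ this returns the three pairs of the form $(2,w)$ listed above plus the three of the form $(1,w)$, so $2(n-2)=6$ in total; likewise for $(i,j)=(4,5)$.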
When are correlated Normal random variables multivariate Normal? [duplicate]
Say I observe n univariate random variables $X_1, \dots, X_n$ that are each $N(\mu, \sigma^2)$ with common correlation $\rho$. Is it possible that these are jointly normal? If so, what are the conditions, and how would I know if they are jointly normal?

There are no conditions based only on the marginal pdfs that can ensure joint normality. Let $\phi(\cdot)$ denote the standard normal density. Then, if $X$ and $Y$ have joint pdf $$f_{X,Y}(x,y) = \begin{cases} 2\phi(x)\phi(y), & x \geq 0, y \geq 0,\\ 2\phi(x)\phi(y), & x < 0, y < 0,\\ 0, &\text{otherwise},\end{cases}$$ then $X$ and $Y$ are (positively) correlated standard normal random variables (work out the marginal densities to verify this if it is not immediately obvious) that do not have a bivariate joint normal density. So, given only that $X$ and $Y$ are correlated standard normal random variables, how can we tell whether $X$ and $Y$ have the joint pdf shown above or the bivariate joint normal density with the same correlation coefficient? In the opposite direction, if $X$ and $Y$ are independent random variables (note the utter lack of mention of normality of $X$ and $Y$) and $X+Y$ is normal, then $X$ and $Y$ are normal random variables (Feller, Chapter XV.8, Theorem 1).
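One can also see this by simulation: drawing $|Z_1|$ and $|Z_2|$ and attaching a common random sign produces exactly the joint pdf above (mass $2\phi(x)\phi(y)$ on the two quadrants where $x$ and $y$ share a sign). The marginals come out standard normal and the correlation is $2/\pi \approx 0.64$, yet $X+Y$ is far from normal: its density vanishes at the origin. A quick numpy sketch (sample size and thresholds are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
s = rng.choice([-1.0, 1.0], size=n)   # common sign puts the mass on two quadrants
x = np.abs(rng.normal(size=n)) * s    # each marginal is still standard normal
y = np.abs(rng.normal(size=n)) * s

corr = np.corrcoef(x, y)[0, 1]              # theoretical value: 2/pi ~ 0.637
p_near_zero = np.mean(np.abs(x + y) < 0.2)  # ~0.088 for a bivariate normal with
                                            # matching moments; here it is far smaller
```

The dip of $X+Y$ around zero is a feature no bivariate normal with these marginals can produce, since a normal $X+Y$ would put plenty of mass near its mean.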
When are correlated Normal random variables multivariate Normal? [duplicate]
It certainly is possible. From a theoretical perspective, there are many different ways to "characterize" the Multivariate Normal distribution, see for example Hamedani, G. G. (1992). Bivariate and multivariate normal characterizations: a brief survey. Communications in Statistics-Theory and Methods, 21(9), 2665-2688. From a practical perspective see for example Henze, N. (2002). Invariant tests for multivariate normality: a critical review. Statistical papers, 43(4), 467-506.
When are correlated Normal random variables multivariate Normal? [duplicate]
This is an interesting question. I will look at it from another viewpoint: When should you expect that a joint distribution with normal marginals is not multinormal? Certain phenomena occurring in data cannot be described by a multinormal distribution, and some examples (even a list) of such phenomena are interesting. Two examples, as a start: If the random vector $(Y,X^T)^T$ is multinormal (take here $Y$ as scalar), then the conditional expectation of $Y$ given $X$ takes the form of a linear function in $x$: $$\DeclareMathOperator{\E}{\mathbb{E}} \E [Y \mid X=x]= \beta_0 + \beta^T x $$ for some parameters $\beta_0, \beta$ (which can be calculated from the expectation and covariance matrix of $(Y,X^T)^T$). So, if data indicate that the regression of $Y$ on $X$ is nonlinear, or needs interaction terms, then the joint distribution cannot be multinormal. For an example see Conditional expectation of two identical marginal normal random variables. Another example is Can I analyze or model a conditional correlation?, which is about studying the correlation between two variables conditional on a third, and how that correlation changes with values of the third. If the three variables are multinormal, one can show easily that the conditional correlation is a constant, so this phenomenon cannot occur. But there must be many other interesting such examples ...
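The constancy of the conditional correlation under multinormality follows from the conditional covariance formula $\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$, which does not involve the conditioning value at all. A small numpy sketch with an arbitrary illustrative covariance matrix (the numbers are assumptions of the example):

```python
import numpy as np

# Trivariate normal: condition (X1, X2) on X3 = x3. The conditional mean moves
# with x3, but the conditional covariance below contains no x3, so the
# conditional correlation is one constant for every value of x3.
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
S11 = Sigma[:2, :2]
S12 = Sigma[:2, 2:]
S22 = Sigma[2:, 2:]
cond_cov = S11 - S12 @ np.linalg.inv(S22) @ S12.T      # free of x3
cond_corr = cond_cov[0, 1] / np.sqrt(cond_cov[0, 0] * cond_cov[1, 1])
```

This reproduces the textbook partial correlation $(\rho_{12}-\rho_{13}\rho_{23})/\sqrt{(1-\rho_{13}^2)(1-\rho_{23}^2)}$, so under multinormality "conditional correlation" and "partial correlation" coincide and do not vary with the conditioning value.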
Examples of variables on interval scale (besides temperature)
I will consider a variable on an interval scale to be one whose elements have an order, with meaningful and comparable differences, but with a zero which is not meaningful. This is in contrast with ratio scales, which have all the qualities of interval scales plus a meaningful zero, where zero means the quantity vanishes, does not exist. Now some examples:

temperature: if measured in kelvin it is on a ratio scale, since 0 K means there is no heat; when temperature is measured in Celsius or Fahrenheit it is on an interval scale

dates: interval scale, since you have no zero; you can choose your reference however you like, it has no meaning

location in Cartesian space: you can choose your origin however you like; having a point at $0$ on the real axis in 1D space does not mean it has no location; note however that distance from an origin is a ratio scale measurement

cardinal direction measured in degrees from true North is on an interval scale; the departure from North, in contrast, is on a ratio scale

custom scores - for example a score between 1 and 5 which measures satisfaction; while there is some debate about whether these scores are ordinal or not, there are many who consider them interval scales, even in sociology texts

IQ scores, GPA and similar - most of them are calibrated around some mean, but no human is assumed to have a 0 score, equivalent to no intelligence at all (even if I can think that some have good chances to break that)
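The temperature example can be made concrete with a few lines of arithmetic: under the affine change of units $F = \frac{9}{5}C + 32$, ratios of levels change, but ratios of differences are preserved, and the latter is exactly what an interval scale supports.

```python
def c_to_f(c):
    # affine change of units: interval scales admit exactly these transformations
    return c * 9 / 5 + 32

# Ratios of levels are NOT invariant: "20 degrees is twice as hot as 10" fails.
ratio_c = 20 / 10                      # 2.0 in Celsius
ratio_f = c_to_f(20) / c_to_f(10)      # 68 / 50 = 1.36 in Fahrenheit

# Ratios of differences ARE invariant, so statements about differences survive.
diff_ratio_c = (30 - 20) / (20 - 10)
diff_ratio_f = (c_to_f(30) - c_to_f(20)) / (c_to_f(20) - c_to_f(10))
```

Both difference ratios equal 1.0: a 10-degree step is the same size everywhere on the scale, regardless of the unit system.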
Examples of variables on interval scale (besides temperature)
In economics, utility can be considered to be on interval scale. There is some disagreement whether utility should be ordinal or cardinal, but in the case of cardinal utility, it is measured on the interval scale. The following is a quote from Wikipedia's article on Cardinal utility, section Measurability: It is helpful to consider the same problem as it appears in the construction of scales of measurement in the natural sciences. In the case of temperature there are two degrees of freedom for its measurement - the choice of unit and the zero. Different temperature scales map its intensity in different ways. In the celsius scale the zero is chosen to be the point where water freezes, and likewise, in cardinal utility theory one would be tempted to think that the choice of zero would correspond to a good or service that brings exactly $0$ utils. However this is not necessarily true. The mathematical index remains cardinal, even if the zero gets moved arbitrarily to another point, or if the choice of scale is changed, or if both the scale and the zero are changed. Every measurable entity maps into a cardinal function but not every cardinal function is the result of the mapping of a measurable entity. The point of this example was used to prove that (as with temperature) it is still possible to predict something about the combination of two values of some utility function, even if the utils get transformed into entirely different numbers, as long as it remains a linear transformation. See also Wakker "Explaining the characteristics of the power (CRRA) utility family" (2008).
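The quote's point — that predictions survive any positive linear (affine) rescaling of utils, but nothing stronger — can be shown in a few lines. The lotteries and transforms below are made-up illustrations:

```python
def expected_utility(outcomes, probs, u):
    return sum(u(x) * p for x, p in zip(outcomes, probs))

# Two hypothetical lotteries over amounts of utils.
risky = ([10.0, 0.0], [0.5, 0.5])
safe = ([4.0], [1.0])

def prefers_risky(u):
    return expected_utility(*risky, u) > expected_utility(*safe, u)

identity = lambda x: x                  # EU: 5 vs 4 -> risky preferred
affine = lambda x: 3.0 * x - 7.0        # positive affine map: ranking unchanged
concave = lambda x: x ** 0.5            # nonlinear monotone map: can flip it
```

Under `identity` and `affine` the ranking agrees, but under the nonlinear `concave` map it reverses ($0.5\sqrt{10} \approx 1.58 < 2$), which is why cardinal utility is determined only up to positive affine transformations.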
Calculate probability of disease appearance
I personally feel this lends itself well to a survival analysis. You have people without moles on a given hand at the start of the period (your at-risk population); you can select these, and you have follow-up time points together with whether each person experienced the event (developed a mole) or was censored. This gives you a hazard for whatever cohort you've selected. You can then calculate a hazard ratio (e.g. for developing a right-hand mole in people with a left-hand mole at baseline, versus those without). This could be expressed on a Kaplan-Meier graph and will come with a confidence interval.
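For intuition, the product-limit (Kaplan-Meier) estimator behind such a graph is only a few lines; the follow-up times below are invented for illustration:

```python
def kaplan_meier(times, events):
    # times: follow-up time per patient
    # events: 1 if the event (a new mole) was observed, 0 if censored
    s, curve = 1.0, []
    for t in sorted(set(times)):
        at_risk = sum(1 for tt in times if tt >= t)
        d = sum(1 for tt, e in zip(times, events) if tt == t and e == 1)
        if d:                         # survival only drops at observed event times
            s *= 1 - d / at_risk
            curve.append((t, s))
    return curve

# toy cohort: five patients, two censored (still mole-free when last seen)
curve = kaplan_meier([2, 3, 3, 5, 7], [1, 0, 1, 1, 0])
```

Comparing two such curves (e.g. patients with vs without a contralateral mole at baseline) is what the log-rank test and the hazard ratio formalize; in practice a library such as R's `survival` or Python's `lifelines` would handle the bookkeeping and the confidence intervals.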
Calculate probability of disease appearance
There is no modeling to be done here; all of your questions are simple conditional probabilities. Alright, since people did not appreciate that answer, you need to clarify a couple of things. I am interested in finding the probability of a hand developing a mole among only the patients that developed a mole in one hand and finding the probability of developing a mole in the other hand (given that the patient already had a mole in the other hand). Do you mean per visit? Or that they never developed a mole ever? From your example: Patients 1 and 3 developed a mole on one hand. Patient 1 never developed a mole on the other hand but patient 3 did, so you could argue the answer to your question is 50%. Now, you could also argue that patient 1 had 4 checkups with a mole on one hand and not on the other, and patient 3 had 0 such checkups, so the probability could be 1/5 = 20%. It depends on how you define your question.
Calculate probability of disease appearance
Personally, I think you can start by studying multivariate covariance generalized linear models (McGLMs):

https://cran.r-project.org/web/packages/mcglm/index.html
https://cran.r-project.org/web/packages/mcglm/vignettes/GLMExamples.html
http://cursos.leg.ufpr.br/mcglm4aed/slides/2-mcglm.html#(1)

These models are appropriate when you have more than one response variable and they're not Gaussian, which is your case, as you have two binary variables (mole or no mole on each hand). Also, the method lets you deal with intra-individual dependencies, which are induced by the longitudinal structure. Here, longitudinal means repeated measures on the same individual over time. I think the links above will help you get a good idea of these techniques, and they also provide the computational implementation in R.
What is the difference between invariance to translation, covariance to translation and equivariance to translation?
There are two schools of thought when it comes to the definitions of equivariance, covariance, invariance, and same-equivariance.

Covariance is a concept often used in physics and is the same notion as equivariance. Both are used when applying the transformation $\pi$ to the input of the function $f$ can be achieved by applying another transformation $\psi$ to the output of the function: $f(\pi(x))=\psi(f(x))$

Same-equivariance is a special case of equivariance when $\psi=\pi$ (in some literature, same-equivariance is termed equivariance, and, instead, equivariance is termed covariance): $f(\pi(x))=\pi(f(x))$

Invariance is another special case, when the transformation $\psi$ is the identity function ($\psi=\mathbb{1}$): $f(\pi(x))=f(x)$

Based on the above definitions: convolution is "equivariant" to translation; convolution is also "same-equivariant" to translation, and since covariance is just another term for the same concept, convolution is "covariant" to translation. The same is true for convolutional layers.
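These definitions are easy to check numerically for a 1-D convolution. The sketch below uses circular convolution and circular shifts so boundary effects don't blur the picture (with zero padding, equivariance only holds away from the edges):

```python
import numpy as np

def circ_conv(x, k):
    # circular 1-D convolution/correlation: y[i] = sum_j k[j] * x[(i + j) % n]
    n = len(x)
    return np.array([sum(k[j] * x[(i + j) % n] for j in range(len(k)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=12)
k = np.array([1.0, -2.0, 0.5])
shift = 4

# (same-)equivariance: shifting the input shifts the feature map identically
lhs = circ_conv(np.roll(x, shift), k)
rhs = np.roll(circ_conv(x, k), shift)

# invariance: global max pooling on top of the convolution forgets the shift
pooled_equal = circ_conv(np.roll(x, shift), k).max() == circ_conv(x, k).max()
```

So the convolution itself is (same-)equivariant, and stacking a pooling map over the whole output is how CNNs turn that equivariance into translation invariance.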
Transfer learning: How and why retrain only final layers of a network?
Why would one want to avoid retraining all the layers of a transfer learning network if the fine-tuning dataset was small? I.e., (if I understand it correctly), why would one not want to apply normal back-propagation through to the input layer?

If the new dataset is small, the reason to restrict training to the new layers is to avoid overfitting. The entire network contains many more parameters and, in the small data regime, there's a higher chance of finding a solution that fits the training set but doesn't generalize well. The idea behind transfer learning is that the original network has learned an internal representation that will also work well for the new task. This representation is given by the output of the final layer we keep from the original network. By training only the new layers, we simply keep that representation and learn how to process it for the new task. Because the new layers contain fewer parameters than the entire network, there's less risk of overfitting.

In transfer learning, is there any difference in how backprop is applied when only training the last few layers?

There's no difference in how gradients are computed or how they're used to update the parameters. The only difference is that parameters for the early layers are held fixed, so these components of the gradient need not be computed.
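The "frozen early layers" setup can be mimicked with a toy numpy network: the first layer below plays the role of the pretrained feature extractor and is never updated, while gradient descent trains only the new head (architecture and data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5)            # synthetic regression target

W1 = rng.normal(size=(5, 8))          # "pretrained" layer: frozen, no gradients
w2 = np.zeros(8)                      # new task head: the only trained weights
H = np.tanh(X @ W1)                   # fixed representation, computed once

W1_before = W1.copy()
lr = 0.1
for _ in range(500):
    err = H @ w2 - y
    w2 -= lr * H.T @ err / len(y)     # backprop stops at the head

final_loss = float(np.mean((H @ w2 - y) ** 2))
initial_loss = float(np.mean(y ** 2))
```

Since the frozen layer's gradient is never needed, the representation `H` is computed once up front; this is also why feature-freezing saves so much compute and memory in real frameworks.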
Transfer learning: How and why retrain only final layers of a network?
This answer applies to finetuning pretrained transformer models in NLP, but not computer vision. Contrary to Ng's advice, and somewhat contrary to the currently accepted answer, it's standard practice to fine-tune the entire transformer, more-or-less regardless of the amount of training data. See the standard text classification tutorial, for example. A more compelling example is that SetFit1 achieves excellent accuracy on many few-shot text classification benchmarks after finetuning all 100M+ parameters of a transformer model using as few as 50 observations.

Some notes before presenting experiments: None of the training algorithms mentioned in this answer rely on layer-wise learning rates, in case you were concerned about that. As usual, the learning rate + scheduler is just another hyperparameter you tune based on folklore and experiments. In all of the experiments, unfreezing a transformer's attention block is phrased as unfreezing a "layer". An attention block technically contains two big layers (in the strictest sense of the word) and many many weight matrices.

Here are 2 mini empirical analyses which contain plots where the x-axis is the # of frozen encoder or decoder layers and the y-axis is accuracy:

The first GPT paper2: see the left plot of Figure 2. The paper doesn't vary training sizes for that plot, so it's hard to say how affordable different amounts of unfreezing are for a smaller training set.

This blog post for BERT: Interestingly, there doesn't appear to be a strong interaction effect of # unfrozen layers and training set size on accuracy; you can unfreeze somewhat liberally. Unfortunately, the blog post doesn't contain training scores, so it doesn't provide evidence that more unfreezing causes greater complexity. The GPT paper does provide this evidence. And in my experience training transformers for classification and similarity tasks, this has been the case. 
The plots are slightly dubious to me because it looks like freezing all 12 BERT encoder blocks (except presumably the tacked-on linear layer) gets majority accuracy, i.e., nothing is really learned. Typically, freezing all of the encoder blocks does not perform this terribly. More on that later. (From the blog post) SST-2 benchmark: (From the blog post) CoLA benchmark: Going even further, there's evidence3 that re-initializing some of BERT's attention blocks before training improves performance, even with just a few thousand training observations: In other words, intentionally forgetting some of what was learned during pretraining can improve performance on the target task. So don't be too concerned about seemingly immodest increases in variance / decreases in bias, as the accepted answer may lead you to believe. These quantities are not intuitive for modern NNs. You have to run experiments. (That paper is probably the most thorough analysis of BERT fine-tuning that I've seen. You may find other experiments in there to be insightful.) It's also important to not just count layers when thinking about complexity; pay attention (pun intended) to what the layers are doing. When classifying text using transformers, a linear layer is tacked on to a pooled or specifically chosen output from the pretrained model, which consists of many attention blocks which do the heavy lifting. Freezing all but the linear layer may do fine. But freezing all but the linear layer and the last attention block may end up doing significantly better, as the step in model complexity is significant. Empirically, unfreezing subsequent attention blocks can yield diminishing returns. Finally, addressing your question: Is unfreezing more layers always better? Yes for modern NLP transformer models. There aren't many caveats to that answer, which is indeed surprising. But keep in mind that you can save a great deal of training time and memory at little-to-no statistical cost by unfreezing fewer layers.
Here's a passage from the original BERT paper4 re an experiment where they don't finetune BERT at all. They instead use it as a feature extractor for a named entity recognition task: . . . we apply the feature-based approach by extracting the activations from one or more layers without fine-tuning any parameters of BERT. These contextual embeddings are used as input to a randomly initialized two-layer 768-dimensional BiLSTM before the classification layer. ¶ Results are presented in Table 7. BERTLARGE performs competitively with state-of-the-art methods. The best performing method concatenates the token representations from the top four hidden layers of the pre-trained Transformer, which is only 0.3 F1 behind fine-tuning the entire model. This demonstrates that BERT is effective for both finetuning and feature-based approaches. Based on my own classification experiments, you don't even need to train a BiLSTM on BERT features to compete with finetuning BERT. Fitting $l_2$ logistic regression on mean-pooled token embeddings (or the [CLS] token embedding for BERT, or the last token embedding for autoregressive models) from the last attention block is a statistically stable and CPU-friendly baseline. Feature extraction approaches are also great for ML applications where you need to run a suite of classifiers for each input, as you can share the output of a single model's forward pass. Because of these benefits, I wouldn't be too keen on unfreezing layers for simpler tasks. References Tunstall, Lewis, et al. "Efficient Few-Shot Learning Without Prompts." arXiv preprint arXiv:2209.11055 (2022). Radford, Alec, et al. "Improving language understanding by generative pre-training." (2018). Zhang, Tianyi, et al. "Revisiting few-sample BERT fine-tuning." arXiv preprint arXiv:2006.05987 (2020). Devlin, Jacob, et al. "Bert: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
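The feature-extraction baseline described above can be sketched as follows. This is a hedged illustration, not the original experiment: the token embeddings are random stand-ins (in a real pipeline they would come from a frozen transformer's last attention block), and the $l_2$ logistic regression is fit with plain gradient descent so nothing beyond NumPy is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for transformer token embeddings: (n_docs, n_tokens, hidden_dim).
# In a real pipeline these would be a frozen model's outputs.
n_docs, n_tokens, dim = 200, 16, 32
class_means = rng.normal(size=(2, dim))
labels = rng.integers(0, 2, size=n_docs)
token_embs = class_means[labels][:, None, :] + rng.normal(scale=2.0, size=(n_docs, n_tokens, dim))

# Mean-pool token embeddings into one feature vector per document
X = token_embs.mean(axis=1)

# l2-regularized logistic regression fit by gradient descent
w, b, lam, lr = np.zeros(dim), 0.0, 1e-2, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
    w -= lr * (X.T @ (p - labels) / n_docs + lam * w)
    b -= lr * (p - labels).mean()

acc = ((X @ w + b > 0).astype(int) == labels).mean()
```

Mean pooling also shrinks the per-coordinate noise by a factor of $\sqrt{n_{\text{tokens}}}$, which is part of why this baseline is statistically stable.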
36,840
Finite sum of beta prime iid random variables
Let $\{X_i\}_{i=1}^n$ with $X_i\overset{\text{i.i.d}}{\sim}\beta^\prime(\alpha,\beta)$ and $Z=X_1+\dots+X_n$. It follows from linearity of the expected value that $$ \mathsf EZ=\sum_{i=1}^n\mathsf EX_i=n\mathsf EX=\frac{n\alpha}{\beta-1},\quad\beta>1. $$ Furthermore, by mutual independence of the $X_i$'s we have $$ \mathsf{Var}Z=\sum_{i=1}^n\mathsf{Var}X_i=n\mathsf{Var}X=\frac{n\alpha(\alpha+\beta-1)}{(\beta-2)(\beta-1)^2},\quad\beta>2. $$ Since we know $Z\sim\beta^\prime(\gamma,\delta)$ we may write the system of equations $$ \begin{aligned} \frac{n\alpha}{\beta-1} &=\frac{\gamma}{\delta-1}\\ \frac{n\alpha(\alpha+\beta-1)}{(\beta-2)(\beta-1)^2} &=\frac{\gamma(\gamma+\delta-1)}{(\delta-2)(\delta-1)^2}. \end{aligned} $$ Solving this system for $\gamma$ and $\delta$ subsequently yields the following result: $$ \begin{aligned} \gamma &=\frac{\alpha n \left(\alpha +\beta ^2-2 \beta +\alpha \beta n-2 \alpha n+1\right)}{(\beta -1) (\alpha +\beta -1)}\\ \delta &=\frac{2 \alpha +\beta ^2-\beta +\alpha \beta n-2 \alpha n}{\alpha +\beta -1}. \end{aligned} $$ With some algebra you may be able to simplify these expressions. Update: Based on the discussion surrounding the exactness/correctness of the results I decided to perform an experiment in MATLAB. Here is the code used which performs the simulation for $\alpha=\beta=15$ and $n=5$:

a = 15; %alpha
b = 15; %beta
n = 5;
c = (a*n*(a-2*b-2*a*n+b^2+a*b*n+1))/((b-1)*(a+b-1)); %gamma
d = (2*a-b-2*a*n+b^2+a*b*n)/(a+b-1); %delta
Xdata = 1./betarnd(a,b,1e6,n)-1;
Xn = sum(Xdata,2); %Xn = X_1+X_2+...+X_n
ax = linspace(0,max(Xn),256);
f_Xn = @(x) x.^(c-1).*(1+x).^(-c-d)/beta(c,d);
figure
hold on
histogram(Xn,64,'normalization','pdf')
plot(ax,f_Xn(ax),'Color',[0,0,0],'LineWidth',1.5)
xlabel(['X_' num2str(n)])
ylabel('density')
box on
hold off

Here we see that the histogram does indeed agree with the theoretical beta prime distribution with parameters $\gamma$ and $\delta$ as derived above.
I tried other values of $\alpha$, $\beta$ and $n$ with similar results.
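The same sanity check can be run in Python with NumPy. This is a hedged re-implementation of the MATLAB experiment above that compares only the first moment rather than plotting the density; it uses the fact that if $B\sim\text{Beta}(\beta,\alpha)$ then $1/B-1\sim\beta^\prime(\alpha,\beta)$.

```python
import numpy as np

a, b, n = 15.0, 15.0, 5  # alpha, beta, n

# Moment-matched parameters gamma and delta from the derivation above
c = a * n * (a + b**2 - 2*b + a*b*n - 2*a*n + 1) / ((b - 1) * (a + b - 1))  # gamma
d = (2*a + b**2 - b + a*b*n - 2*a*n) / (a + b - 1)                          # delta

# Monte Carlo: if B ~ Beta(beta, alpha) then 1/B - 1 ~ BetaPrime(alpha, beta)
rng = np.random.default_rng(1)
x = 1.0 / rng.beta(b, a, size=(10**6, n)) - 1.0
z = x.sum(axis=1)  # Z = X_1 + ... + X_n
```

By construction of the moment match, $\gamma/(\delta-1)$ equals $n\alpha/(\beta-1)$, and the empirical mean of $Z$ agrees with both.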
36,841
Interpretation of the Pearson Correlation with respect to Z-Scores
The Pearson sample correlation coefficient can be written as: $$r = \frac{1}{n-1} \sum_{i=1}^n z_{1,i} \cdot z_{2,i} \quad \quad \quad z_{k,i} = \frac{x_{k,i} - \bar{x}_k}{s_k}.$$ This result means that the sample correlation of two vectors $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ is equivalent to the sample correlation of their z-scores $\boldsymbol{z}_1$ and $\boldsymbol{z}_2$ (i.e., sample correlation is determined through the z-scores). Hence, it is accurate to say that the correlation coefficient expresses a relationship between the z-scores of the two sample vectors. The second part of the statement, about the predictive effect of a change in one variable, is not true in general, but is true in the special case where the underlying data is jointly-normal (so long as we interpret the statement without conflating correlation and cause$^\dagger$). If we have an underlying normal distribution $(X_1, X_2) \sim \text{N}$ then the expected difference in $Z_2$ conditional on an "increase" from $Z_1 = x$ to $Z_1 = x+k$ is: $$\text{Expected difference } (\Delta = k) = \mathbb{E}(Z_2 | Z_1 = x + k) - \mathbb{E}(Z_2 | Z_1 = x) = \rho \cdot k.$$ Hence, replacing the true correlation with the sample correlation you would have the predictive result: $$\text{Predicted change}(\Delta = k) = \mathbb{E}(Z_2 | Z_1 = x + k) - \mathbb{E}(Z_2 | Z_1 = x) = r \cdot k.$$ Now, taking $k=1$ yields an interpretation of $r$ as the predictive change in this case: $$\text{Predicted change}(\Delta = 1) = r \cdot 1 = r.$$ Hence, we see that for data from an underlying joint-normal distribution, an "increase" of one standard deviation for one of the variables, leads to a predictive change of $r$ standard deviations for the other variable. Note that this result is not a general result, and holds only in the case where the underlying distribution of the data is jointly normal. 
The second part of the statement should therefore be interpreted as a "rule of thumb" that applies in the jointly-normal case, but would apply only approximately for other distributions. $^\dagger$ Note that in the above exposition we need to be careful not to conflate correlation with cause. Strictly speaking, if we increase $X_1$ through some action, then it is not appropriate to make a prediction based on the correlation, since we now need to know the causal effect of that increase. Hence, the above equations should be interpreted as predictive changes comparing two different observations of $X_1$ that differ by a specified amount. We have indicated this by referring to the "increase" in quotation marks.
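Both claims above can be checked numerically: that $r$ is the average product of z-scores, and that a one-standard-deviation difference in one variable predicts a difference of $r$ standard deviations in the other (here read off as the slope of $\boldsymbol{z}_2$ on $\boldsymbol{z}_1$). The simulated bivariate normal below is illustrative, not from the original answer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.6

# Simulate a bivariate normal with true correlation rho
x1 = rng.normal(size=n)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Sample correlation as the average product of z-scores (ddof=1 throughout)
z1 = (x1 - x1.mean()) / x1.std(ddof=1)
z2 = (x2 - x2.mean()) / x2.std(ddof=1)
r = (z1 * z2).sum() / (n - 1)

# Predicted change in z2 per unit change in z1: the slope of z2 on z1 equals r
slope = np.polyfit(z1, z2, 1)[0]
```

Because the z-scores have mean 0 and (sample) standard deviation 1, the least-squares slope of $\boldsymbol{z}_2$ on $\boldsymbol{z}_1$ is algebraically identical to $r$.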
36,842
Interpretation of the Pearson Correlation with respect to Z-Scores
Consider a simple linear regression on $(y_i, x_i)$, $i=1,\dots,n$: $y_i = \alpha + \beta x_i + e_i$, where $e_i$ is the error term (under the usual regression assumptions), and take a look at the regression coefficient $\hat{\beta} = S_{xy}/S_{xx}$, which can be written as a function of the correlation coefficient $r$. Specifically, $\hat{\beta} = r \sigma_y/\sigma_x$. Now interpret the regression coefficient: a 1-unit change in $x$ results in a $\hat{\beta}$-unit change in $y$. Therefore, a change of one standard deviation ($\sigma_x$) in $x$ results in a change of $\hat{\beta} \cdot \sigma_x = r \sigma_y$ in $y$. Good luck with your teaching!
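The identity $\hat{\beta} = r\,s_y/s_x$ is easy to verify numerically; the simulated data below is a made-up example.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000
x = rng.normal(loc=2.0, scale=3.0, size=n)
y = 1.5 + 0.8 * x + rng.normal(scale=2.0, size=n)

# OLS slope via the normal equations: beta_hat = S_xy / S_xx
beta_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()

# The same slope via the correlation: beta_hat = r * s_y / s_x
r = np.corrcoef(x, y)[0, 1]
beta_via_r = r * y.std(ddof=1) / x.std(ddof=1)
```

The two expressions agree exactly (up to floating point), since $S_{xx} = (n-1)s_x^2$ and $S_{xy} = (n-1)\,r\,s_x s_y$.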
36,843
Seasonal Data with GAMMs
Depends on how you want to nest the autocorrelation. Within days?

modar2 <- gamm(apparentTemperature ~ s(year) + s(month, bs = "cc", k = 12) + s(time, k = 20),
               data = timetemp,
               correlation = corARMA(form = ~ 1 | day, p = 2),
               control = ctrl)

would have a smooth long-term trend, a smooth seasonal effect, and a smooth time-of-day effect, with autocorrelation nested within days (for which you'd need to create a new variable day which gives the day of year from the date-time variable). If you have a lot of data, you really don't want to use form = ~ obs_seq for the correlation structure, where obs_seq is a sequence 1, 2, ..., number of observations, as that will create a massive covariance matrix that lme() will need to invert at each iteration. Having fitted such a model to high-frequency data, it took gamm() a week to converge on a powerful multicore workstation. The reason I nested the correlation within year in that example was partly for this reason; that's a long monthly record and fitting a full ARMA function across all timepoints is not quick.
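Constructing the day-of-year variable mentioned above is a one-liner; here is a sketch using Python's standard library (in R, lubridate::yday() on the date-time column does the same job).

```python
from datetime import datetime

# Day-of-year variable for nesting the corARMA structure within days
timestamps = ["2020-01-01 00:00", "2020-03-01 12:00", "2020-12-31 23:00"]
days = [datetime.strptime(t, "%Y-%m-%d %H:%M").timetuple().tm_yday
        for t in timestamps]
```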
36,844
Support vector machines (SVMs) are the zero temperature limit of logistic regression?
In the case of hard-margin SVM and linearly separable data, this is true. An intuitive sketch: The loss for each datapoint in logistic regression dies out almost as an exponential decay curve as you get farther from the decision boundary (in the correct direction, of course). This exponential decay means that the points closest to the boundary incur much more loss. As the temperature drops to 0, the points closest to the boundary completely dominate the loss, and the loss is determined by exactly how close the closest points are. Binary logistic regression has the cross-entropy loss: $- y \log p - (1-y)\log (1-p)$ where $y$ is the label and $p$ is the predicted probability in $(0,1)$. Typically, $p = \sigma(w^Tx + b)$ where $\sigma$ is the sigmoid function. Based on the temperature parameter introduced in this paper, I suspect that the temperature refers to a modification of the formulation: $p = \sigma(\frac{w^Tx}{\tau})$, where $\tau$ is the temperature and I've dropped the bias term for simplicity. Considering only the first term of the loss, $-y\log p = y\log(1+\exp{}(-\frac{w^Tx}{\tau}))$. Assume all $w^Tx > 0$, because anything else would mean that $x$ is on the wrong side of the decision boundary and would incur infinite loss as $\tau \rightarrow 0$. Since the exponential term gets very small in the limit, we use the first-order Taylor expansion of $\log(1+z)$ to write $-y\log p \approx y\exp{(-\frac{w^Tx}{\tau})}$. Up to now, we've been using just the loss for a single datapoint, but the actual loss is $\sum_i y_i \exp{(-\frac{w^Tx_i}{\tau})}$. Consider only positive labels ($y_i = 1$). Then this sum is dominated by the term where $w^Tx_i$ is the smallest (closest to the decision boundary). This can be seen because the ratio between the $i$ term and the $j$ term is $\frac{\exp (-w^T x_i/\tau)}{\exp (-w^T x_j/\tau)} = \exp(\frac{w^T x_j-w^T x_i}{\tau})$, which goes to infinity or 0 as $\tau \rightarrow 0$, so only the term with the smallest $w^T x_i$ (i.e., the largest exponential term) matters.
A symmetric argument can be used on the second term in the loss. Therefore, the loss of the logistic regression problem as the temperature goes to 0 is minimized by maximizing the minimum distance to the decision boundary.
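The domination argument can be seen in numbers. The margins below are made-up values, and the per-point losses use the $\exp(-w^Tx_i/\tau)$ approximation derived above.

```python
import numpy as np

# Margins w^T x_i of three correctly classified points, closest to the boundary first
margins = np.array([0.5, 1.0, 2.0])

def loss_shares(tau):
    # Per-point losses using the approximation exp(-m / tau) derived above,
    # normalized so the shares sum to 1
    losses = np.exp(-margins / tau)
    return losses / losses.sum()

shares_warm = loss_shares(1.0)   # every point contributes to the loss
shares_cold = loss_shares(0.01)  # the closest point dominates
```

At $\tau = 1$ the closest point carries only about half of the loss; at $\tau = 0.01$ its share is indistinguishable from 1, so minimizing the loss means maximizing the smallest margin.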
36,845
random forest for imbalanced data?
There are usually two methods to deal with imbalanced data while using the random forest model. One approach is cost-sensitive learning and the other is sampling. For extremely imbalanced data, random forest generally tends to be biased towards the majority class. The cost-sensitive approach is to assign different weights to different classes: if the minority class is assigned a higher weight, and thus a higher misclassification cost, that can help reduce the bias towards the majority class. You can use the class_weight parameter of random forest in scikit-learn to assign weights to each class. Secondly, there are different methods of sampling, such as oversampling the minority class or undersampling the majority class. Although simple sampling methods can improve overall model performance, it's preferable to go for a more specialized sampling method such as SMOTE to get a better model. Most machine learning models suffer from the imbalanced data problem, although there is some reason to believe that generative models generally tend to perform better on imbalanced datasets.
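A minimal sketch of the cost-sensitive approach with scikit-learn. The toy data is made up; class_weight="balanced" sets class weights inversely proportional to class frequencies, which raises the misclassification cost of the minority class.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Made-up imbalanced data: roughly 95% majority class, 5% minority
n = 400
y = (rng.random(n) < 0.05).astype(int)
X = rng.normal(size=(n, 4)) + 2.0 * y[:, None]  # minority class shifted

# class_weight="balanced" reweights classes inversely to their frequency
clf = RandomForestClassifier(n_estimators=50, class_weight="balanced", random_state=0)
clf.fit(X, y)
```

You can also pass an explicit dict, e.g. class_weight={0: 1, 1: 20}, if you want to set the relative misclassification costs yourself.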
36,846
Distribution of $X+Y$ when $X$ and $Y$ are i.i.d with pdf $f(x)=\alpha\beta^{-\alpha}x^{\alpha-1}\mathbf1_{0<x<\beta}$
Since \begin{align} \int_{\max(t-\beta,0)}^{\min(t,\beta)}(y(t-y))^{\alpha-1}\,\mathrm{d}y\,\mathbf1_{0<t<2\beta}&=\begin{cases} \int_{0}^{t}(y(t-y))^{\alpha-1}\,\mathrm{d}y&\text{when }0\le t\le \beta\\ \int_{t-\beta}^{\beta}(y(t-y))^{\alpha-1}\,\mathrm{d}y&\text{when }\beta\le t\le 2\beta\\ \end{cases}\\ \end{align} we have ($t<\beta$) $$\int_{\max(t-\beta,0)}^{\min(t,\beta)}(y(t-y))^{\alpha-1}\,\mathrm{d}y= \int_{0}^{t/2}(y(t-y))^{\alpha-1}\,\mathrm{d}y+\int_{t/2}^{t}(y(t-y))^{\alpha-1}\,\mathrm{d}y$$ and by a change of variable $z=t-y$ in the second integral of the rhs $$\int_{\max(t-\beta,0)}^{\min(t,\beta)}(y(t-y))^{\alpha-1}\,\mathrm{d}y= 2\int_{0}^{t/2}(y(t-y))^{\alpha-1}\,\mathrm{d}y$$ Similarly, when $t>\beta$, \begin{align*} \int_{t-\beta}^{\beta}(y(t-y))^{\alpha-1}\,\mathrm{d}y&= \int_{t-\beta}^{t/2}(y(t-y))^{\alpha-1}\,\mathrm{d}y+\int_{t/2}^{\beta}(y(t-y))^{\alpha-1}\,\mathrm{d}y\\ &=2\int_{t/2}^{\beta}(y(t-y))^{\alpha-1}\,\mathrm{d}y \end{align*} again by a change of variable $z=t-y$ in the second integral of the rhs. I am however unable to recover the same functional expression for the density in this second case, namely$$2\int_{0}^{w/2}(z(w-z))^{\alpha-1}\,\mathrm{d}z$$ Now, as pointed out in the question, $$2\int_{0}^{w/2}(z(w-z))^{\alpha-1}\,\mathrm{d}z \propto w^{2(\alpha-1)+1}=w^{2\alpha-1}$$by a change of scale, which would imply that the distribution of interest has the density $$f(w)\propto w^{2\alpha-1} \mathbf1_{0<w<2\beta}$$ which turns it into a Beta ${\cal B}(2\alpha,1)$ distribution rescaled on $(0,2\beta)$, hence with density $$f(w) = \{2\beta\}^{-2\alpha}\dfrac{\Gamma(2\alpha+1)}{\Gamma(2\alpha)}w^{2\alpha-1} \mathbf1_{0<w<2\beta}=2\alpha\{2\beta\}^{-2\alpha}w^{2\alpha-1} \mathbf1_{0<w<2\beta}$$ This comes as a contradiction when considering the unbelievably detailed answer from W. Huber, since Uniforms are Beta ${\cal B}(1,1)$. 
The sum of two Uniforms is not a Beta ${\cal B}(2,1)$ random variable, but instead an rv with a "tent" density. Aside: more generally, a sum of Beta variates is not another Beta variate, the "explanation" being straightforward when looking at Betas as two Gammas normalised by their sum: adding two Betas involves different sums in the denominators. The issue is thus with the derivation of the density of $W=U+V$: since $$(U,V) \sim 2\alpha^2 \beta^{-2\alpha}[uv]^{\alpha-1}\,\mathbb{I}_{0<u<v<\beta}$$ a change of variables $(Z,W)=(U,U+V)$ leads to $$(Z,W) \sim 2\alpha^2 \beta^{-2\alpha}[z(w-z)]^{\alpha-1}\,\mathbb{I}_{0<z<w-z<\beta}$$ and the indicator constraints are $$0<z \quad 2z<w \quad z<\beta \quad z>w-\beta \quad 0<w \quad\text{and}\quad w<2\beta$$ Therefore, in conclusion, $$W\sim 2\alpha^2 \beta^{-2\alpha}\int_{\max\{0,w-\beta\}}^{\min\{\beta,w/2\}}[z(w-z)]^{\alpha-1}\,\text{d}z\,\mathbb{I}_{0<w<2\beta}$$ namely (1) and not the proposed expression (2).
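As a sanity check of expression (1), here is a quick numerical sketch with illustrative parameters $\alpha=2$, $\beta=1$: the density integrates to one, and the Monte Carlo mean of $W=X+Y$ matches $2\alpha\beta/(\alpha+1)$ (using the inverse CDF $X = \beta U^{1/\alpha}$ to sample from $f$):

```python
import math, random

alpha, beta = 2.0, 1.0  # illustrative parameters
random.seed(0)

def density_w(w, n_steps=400):
    """Expression (1): 2 a^2 b^(-2a) * integral of [z(w-z)]^(a-1) dz
    over z in (max(0, w-b), min(b, w/2)), via the midpoint rule."""
    lo, hi = max(0.0, w - beta), min(beta, w / 2)
    if hi <= lo:
        return 0.0
    h = (hi - lo) / n_steps
    total = sum(((lo + (j + 0.5) * h) * (w - (lo + (j + 0.5) * h))) ** (alpha - 1)
                for j in range(n_steps))
    return 2 * alpha ** 2 * beta ** (-2 * alpha) * total * h

# The density should integrate to 1 over (0, 2*beta)
n_grid = 800
hw = 2 * beta / n_grid
mass = sum(density_w((i + 0.5) * hw) for i in range(n_grid)) * hw
print(round(mass, 3))  # ~1.0

# Monte Carlo check of the mean: X = beta * U^(1/alpha) has cdf (x/beta)^alpha
mc_mean = sum(beta * random.random() ** (1 / alpha) +
              beta * random.random() ** (1 / alpha) for _ in range(20000)) / 20000
print(round(mc_mean, 2), 2 * alpha * beta / (alpha + 1))  # both ~1.33
```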
36,847
What is the relationship between Cox regression and Tobit regression?
Abbreviated Model Descriptions The Cox model is a survival model that cleverly models hazard ratios through the observed ranks of the data, without needing an assumption about the underlying baseline distribution, but it still requires the proportional hazards assumption. The Tobit model is essentially standard linear regression, except that it can also handle censored data; the assumed distribution is then normal. Pros and Cons Cox Model: Pro: No assumption about the baseline distribution is needed. This is very important for survival analysis: time-to-event data tend to be far from normal, often with extremely heavy right tails. Additionally, by considering only the ranks of the data, you get a model that is more robust to the expected outliers. Con: Coefficient effects can be very difficult to interpret. Tobit Model: Pro: A simple extension of a model most analysts are already familiar with, allowing for censoring; i.e. if all your data were observed and appropriate for linear regression (with the caveat mentioned in the Cons section), then a Tobit model would be appropriate. Cons: Requires the assumptions of linear effects and Gaussian errors. In some applications this is entirely appropriate, but time-to-event data (i.e. survival analysis) rarely fit that criterion. It is also worth noting that the Tobit model is more sensitive to the normality assumption than vanilla linear regression.
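To make the Tobit side concrete, here is a minimal sketch of its log-likelihood for one covariate with right-censoring; the variable names and tiny dataset are hypothetical:

```python
import math

def norm_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def tobit_loglik(y, x, censored, b0, b1, sigma):
    """Tobit log-likelihood with right-censoring: an observed y_i contributes
    the normal density of its residual; a censored y_i contributes
    P(latent Y* > y_i) = 1 - Phi((y_i - mu_i) / sigma)."""
    ll = 0.0
    for yi, xi, ci in zip(y, x, censored):
        z = (yi - (b0 + b1 * xi)) / sigma
        if ci:
            ll += math.log(max(1.0 - norm_cdf(z), 1e-300))
        else:
            ll += math.log(norm_pdf(z) / sigma)
    return ll

# Tiny hypothetical dataset: third observation right-censored at 2.5
y, x, cens = [1.0, 2.0, 2.5], [0.0, 1.0, 2.0], [False, False, True]
print(tobit_loglik(y, x, cens, b0=1.0, b1=1.0, sigma=1.0))
```

With no censoring, this reduces exactly to the Gaussian log-likelihood of ordinary linear regression, which is the sense in which Tobit "extends" it.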
36,848
What is the relationship between Cox regression and Tobit regression?
Neither a normally distributed error term nor a linear link would be an adequate choice for modeling time-to-event outcomes in most circumstances. The distribution of failure times tends to be heavily right-skewed. For models with no censoring, most books on failure-time analysis discuss parametric models: exponential, Gamma, or Weibull maximum likelihood procedures. Log-transforming the event time could justify a linear regression model, and thus the Tobit model could have some applicability for parametric models of lognormal data with censoring. The rationale for lognormal regression models for time-to-event data seems dubious in my opinion: normally distributed data arise when many unmeasured factors contribute additively to an outcome. Exponential and Weibull models, conversely, are probability models that have been discussed in more detail, derived as solutions to differential equations for martingale processes, and are summarized by simple hazard functions. The Cox model does not bother with the distribution of failure time. It is semiparametric, and thus works for a general class of parametric models provided the hazards are proportional. The Cox model uses a partial likelihood over risk sets (the groups of people still at risk at each event time), evaluating a ratio of likelihoods relative to an arbitrary baseline hazard function. Censored observations simply drop out of the risk sets after their censoring time. Most agree it makes full use of the data while assuming as little as possible about the underlying distribution.
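The hazard functions mentioned above are simple enough to sketch directly; $k = 1$ recovers the exponential model's constant hazard, while $k > 1$ gives an increasing hazard:

```python
import math

def weibull_hazard(t, k, lam):
    """Weibull hazard h(t) = (k / lam) * (t / lam)**(k - 1);
    k = 1 gives the exponential model's constant hazard 1/lam."""
    return (k / lam) * (t / lam) ** (k - 1)

def weibull_survival(t, k, lam):
    """Weibull survival function S(t) = exp(-(t / lam)**k)."""
    return math.exp(-((t / lam) ** k))

print(weibull_hazard(5.0, 1.0, 2.0))  # 0.5 -- constant, the exponential case
print(weibull_hazard(1.0, 2.0, 1.0), weibull_hazard(2.0, 2.0, 1.0))  # 2.0 4.0
```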
36,849
Are XGBoost probabilities well-calibrated?
No, they are not well-calibrated. The predicted probabilities are likely not outright horrible, as we would expect from an SVM classifier, but they are not usually very well-calibrated; for that matter, the estimated probability deciles are not even guaranteed to be monotonic. In Caruana et al. (2004), "Ensemble Selection from Libraries of Models", boosted trees have some of the "worst" calibration performance scores. Similarly, in Niculescu-Mizil & Caruana (2005), "Predicting good probabilities with supervised learning", boosted trees have "the predicted values massed in the center of the histograms, causing a sigmoidal shape in the reliability plots". Importantly, these findings do not even touch upon the scenarios of up-sampling, down-sampling or re-weighting our data; in those cases it is very unlikely that our predicted probabilities have a direct interpretation at all. Do note that "badly" calibrated probabilities are not synonymous with a useless model, but I would urge adding an extra calibration step (e.g. Platt scaling, isotonic regression or beta calibration) if using the raw probabilities is of importance. Similarly, Guo et al. (2017), "On Calibration of Modern Neural Networks", can be helpful as it provides a range of metrics (Expected Calibration Error (ECE), Maximum Calibration Error (MCE), etc.) that can be used to quantify calibration discrepancies.
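For reference, a simplified version of the Expected Calibration Error mentioned above can be sketched as follows (this bins the raw predicted probability of the positive class rather than the max-class confidence used in Guo et al.; the toy predictions are hypothetical):

```python
def expected_calibration_error(probs, labels, n_bins=10):
    """Expected Calibration Error: bin predictions by predicted probability,
    then average |accuracy - mean confidence| weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, y))
    n = len(probs)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        conf = sum(p for p, _ in b) / len(b)
        acc = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(acc - conf)
    return ece

# Perfectly calibrated toy predictions -> ECE of 0
probs = [0.25] * 4 + [0.75] * 4
labels = [1, 0, 0, 0, 1, 1, 1, 0]
print(expected_calibration_error(probs, labels))  # 0.0
```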
36,850
Deep Learning approaches for Record Linkage
One classic method for linking text documents uses cosine similarity on TF-IDF features. A simple way to extend this would be to use Doc2Vec or similar document embeddings instead of TF-IDF: cosine similarity of word/document embeddings captures semantic similarity (some people might point out that word embeddings aren't technically deep learning, but I think the author might still find these methods useful). A second approach is to learn a distance function that corresponds to item dissimilarity. This is analogous to the record-linkage method that uses TF-IDF features (the learned distance function plays the role of cosine similarity in that model). Siamese networks can be used to learn such distance functions; they are essentially networks that, given two examples, return their similarity/dissimilarity. "Siamese" refers to the use of shared weights for the hidden layers (both inputs are encoded in the same way). Here you can see an example talk on using Siamese networks for a similar task. If you want to read further on Siamese networks, I encourage you to look up one-shot learning, which is somewhat similar to record linkage.
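To make the first approach concrete, here is a minimal TF-IDF plus cosine-similarity sketch in plain Python; the tokenized records and the smoothed-idf scheme are illustrative choices:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Map each tokenized document to a sparse TF-IDF weight dict,
    using a smoothed idf = log((1 + N) / (1 + df))."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: (c / len(d)) * math.log((1 + n) / (1 + df[t]))
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical person records as token lists
docs = [["john", "smith", "ny"], ["john", "smith", "nyc"], ["jane", "doe", "la"]]
v = tfidf_vectors(docs)
print(cosine(v[0], v[1]), cosine(v[0], v[2]))  # matching pair scores higher; disjoint pair is 0.0
```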
36,851
The Myth of Long-Horizon Predictability
I think a simple answer is that one doesn't want to measure $R^2$ on the original scale of the time series. If one's forecast is purely a copy of the last observed value, the $R^2$ will be huge. Example: this could be called a spurious case. I am getting a value of 0.96, while this forecast is essentially worthless. $R^2$ gives an honest value if it is measured on a stationary time series, for example on the first differences of $y$ and $\hat y$.
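A quick sketch of the point: a persistence "forecast" of a simulated random walk scores a near-perfect $R^2$ on levels, yet the very same forecast scores roughly zero (in fact non-positive) on first differences:

```python
import random

random.seed(1)

def r2(actual, pred):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Random walk: tomorrow is genuinely unpredictable from today
y = [0.0]
for _ in range(5000):
    y.append(y[-1] + random.gauss(0, 1))

persistence = y[:-1]          # "forecast" = last observed value
actual = y[1:]
print(round(r2(actual, persistence), 3))   # near 1 on levels

d_actual = [b - a for a, b in zip(y, y[1:])]
d_pred = [0.0] * len(d_actual)             # the same forecast, in differences
print(round(r2(d_actual, d_pred), 3))      # ~0 (slightly negative): no real skill
```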
36,852
The Myth of Long-Horizon Predictability
The problem does not arise because we are using the same dataset for training and validation. It arises because the persistence of the variables magnifies sampling errors and small effects at longer time horizons. As stated in the article, even if you cannot predict future stock market returns from your variable of interest, we expect $R^2$ as well as regression coefficients to be roughly proportional to the time horizon if the variables are persistent. This is because (p. 1584): a) any unusual draw from the returns at time $t$ will influence the returns for $k$ periods, where $k$ is the time horizon; b) a persistent regressor will have very similar values at $t$, $t-1$, $t-2$, ..., $t-k$, and thus "The impact of the unusual draw will be roughly $k$ times larger in the long-horizon regression than in the one-period regression." In the linked article citing the very high $R^2$, the time horizon is ten years and the data are available quarterly, so $k = 40$ and the inflation in $R^2$ will likely be very substantial.
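A rough simulation of the mechanism: a persistent AR(1) regressor, returns that are pure noise, and overlapping $k$-period return sums. All parameter choices here are illustrative:

```python
import random

random.seed(42)
T, rho = 400, 0.98

# Persistent AR(1) regressor; returns are pure noise, so there is
# no true predictability at any horizon.
x = [0.0]
for _ in range(T):
    x.append(rho * x[-1] + random.gauss(0, 1))
x = x[1:]
r = [random.gauss(0, 1) for _ in range(T)]

def horizon_r2(k):
    """R^2 (squared correlation) of regressing overlapping
    k-period return sums on the regressor x_t."""
    ys = [sum(r[t:t + k]) for t in range(T - k)]
    xs = x[:T - k]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy * sxy / (sxx * syy)

print(horizon_r2(1), horizon_r2(40))
# On most seeds the k = 40 R^2 is many times the k = 1 R^2,
# purely from persistence and overlap.
```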
36,853
What is exponential family criterion to test the sufficiency and completeness of an estimator?
The significance of that line is that if you can verify that the parameter space contains an open set in $\mathbb{R}^k$, you know, without any further work, that the sufficient statistic $T(X)$ is also complete. That is usually a lot easier than trying to apply the definition of completeness directly. Completeness is a nice property, but not of overwhelming importance. One consequence is that if you have a complete sufficient statistic, you can construct a UMVUE based upon it (Lehmann–Scheffé). See also: What are complete sufficient statistics?.
36,854
Hard probability problem: How to calculate the probability of having selected only < 70% of all marbles in a bag, given the following draw rules?
Let's solve this generalized Coupon Collector's problem in full generality, drawing $m=9$ out of $n=100$ balls $r=50$ times. If $E(i;r)$ is the event that exactly $i$ distinct balls have been seen after $r$ draws, then--conditional on this--the chance in the next draw of obtaining $k$ balls that haven't yet been seen is found by counting what proportion of the $\binom{n}{m}$ possible samples consist of $m-k$ balls that have been seen and $k$ balls that have yet to be seen. Any such sample is comprised of an $m-k$-subset of the $i$ balls that have been seen together with a $k$-subset of the $n-i$ unseen balls, whence $$\Pr(k\mid E(i;r)) = \frac{\binom{i}{m-k}\binom{n-i}{k}}{\binom{n}{m}}.$$ By summing over the possible values $i=0, 1, \ldots, n$, each multiplied by $\Pr(E(i;r))$, we obtain the chance $\Pr(E(j;r+1))$ of having seen exactly $j$ distinct balls in $r+1$ draws. This update rule, which begins by observing that $m$ distinct balls will be obtained in the first draw, is a simple calculation requiring at most $(m+1)(n+1)$ fast calculations for each successive draw, and thereby requires at most $O((r-1)(m+1)(n+1))$ effort and only $O(n)$ storage. Here are the probability distributions for $r=50$ draws along with some milestones along the way. We can track the chance of observing $70$ or fewer distinct balls as the draws progress. It rapidly drops from $1$ down to almost $0$, so it's best to plot its logarithm: After $50$ draws this chance is only $3.162808\times 10^{-51}$, in accord with your intuition.
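The update rule described above is straightforward to implement; a sketch:

```python
from math import comb

def distinct_distribution(n, m, r):
    """Distribution of the number of distinct balls seen after r draws of
    m balls (without replacement within a draw) from n, via the update rule
    P(k new | i seen) = C(i, m-k) * C(n-i, k) / C(n, m)."""
    denom = comb(n, m)
    p = [0.0] * (n + 1)
    p[m] = 1.0  # the first draw always yields m distinct balls
    for _ in range(r - 1):
        q = [0.0] * (n + 1)
        for i, pi in enumerate(p):
            if pi == 0.0:
                continue
            for k in range(m + 1):
                if k <= n - i and m - k <= i:
                    q[i + k] += pi * comb(i, m - k) * comb(n - i, k) / denom
        p = q
    return p

p = distinct_distribution(100, 9, 50)
print(sum(i * pi for i, pi in enumerate(p)))  # mean = 100 * (1 - 0.91**50), ~99.1
print(sum(p[:71]))                            # P(at most 70 distinct): ~3.16e-51
```

The mean admits an exact check, since each ball is missed by a single draw with probability $(n-m)/n$ independently across draws, giving $E[\text{distinct}] = n(1-((n-m)/n)^r)$.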
36,855
Bias initialization in convolutional neural network
Just noting that the answer to this question suggests setting CNN biases to 0, quoting CS231n Stanford course: Initializing the biases. It is possible and common to initialize the biases to be zero, since the asymmetry breaking is provided by the small random numbers in the weights. For ReLU non-linearities, some people like to use small constant value such as 0.01 for all biases because this ensures that all ReLU units fire in the beginning and therefore obtain and propagate some gradient. However, it is not clear if this provides a consistent improvement (in fact some results seem to indicate that this performs worse) and it is more common to simply use 0 bias initialization. source: http://cs231n.github.io/neural-networks-2/
36,856
Bias initialization in convolutional neural network
Usually you initialise them to 1.0. Biases should be trainable variables, not constants; their value must be allowed to change during training. Biases are necessary in every deep network architecture I know of; without them your network will most likely be unable to learn anything. I don't know what a siamese neural network is, but in architectures where weights are shared (such as convolutional neural networks), weights and biases are always shared together, as they come in pairs; the combination of the two is what defines a layer.
36,857
Input layer batch normalization
It is unlikely they will be the same, since batch normalization has gamma and beta variables on top of the normalization process. In the paper, it is mentioned that gamma and beta are used for scaling and shifting the activations to an appropriate degree in order to correctly represent the data. Here is the passage from the paper: Note that simply normalizing each input of a layer may change what the layer can represent. For instance, normalizing the inputs of a sigmoid would constrain them to the linear regime of the nonlinearity. To address this, we make sure that the transformation inserted in the network can represent the identity transform. To accomplish this, we introduce, for each activation $x^{(k)}$, a pair of parameters $\gamma^{(k)}, \beta^{(k)}$, which scale and shift the normalized value. As the paper says, using the sigmoid as an example, with just normalization the inputs will likely always sit in the linear regime of the sigmoid function (i.e. where its gradients are highest), which may not be optimal for the model's learning; without such scaling and shifting after normalization, learning would likely be restricted to only this region. I think of it as a way of shifting the activation values properly, much like using biases, but in a way that is more effective (especially since direct manipulation of biases is seen to be ineffective, as mentioned in the paper).
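A minimal numeric sketch of the transform described in the quote (function name and defaults are mine): normalize, then scale by gamma and shift by beta. Choosing $\gamma = \sqrt{\mathrm{var} + \epsilon}$ and $\beta = \mathrm{mean}$ recovers the identity, which is exactly the representational point the paper makes.

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize a batch of activations, then scale and shift.
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

xs = [1.0, 2.0, 3.0, 4.0]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
# With gamma/beta chosen this way, the layer reduces to the identity:
restored = batch_norm(xs, gamma=math.sqrt(var + 1e-5), beta=mean)
```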
36,858
Difference between the Fisherian and Neymanian methods for causal inference?
In short, your intuition is correct. Fisher's approach is based on randomization tests: using the randomization procedure of the treatment assignment, you assume a sharp null hypothesis of no effect and then compute the exact p-value, that is, the probability of seeing an effect as big as the one observed assuming no effect. Neyman's approach is based on estimation. You estimate the average treatment effect and a conservative variance (the Neyman variance, equivalent to the HC2 variance) and, by the CLT, you can make inferences using a normal approximation. For more information about both, you should check chapters 5 and 6 of Imbens and Rubin's book. It's important to note that these methods are not "causal methods" strictly speaking; they are simply estimation and testing methods, usually discussed in the context of experiments, where it's the validity of the experiment that warrants the causal meaning. That is, there's nothing causal about them per se, in the sense that whether your estimate is causal or not depends on your identification strategy. To learn more about identification, it's definitely worth studying causal graphs --- you can find references here. On whether one is better than the other, there's no answer to that without context; it depends on your goal. For example, do you care about the sharp null hypothesis? In a lot of cases people don't, since the exact null is implausible, so they care more about estimation --- if that's your case, then randomization tests will not be very useful for your problem.
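To make the contrast concrete, here is a small stdlib-only Python sketch (all names and data are illustrative, not from any particular text): a Fisher-style randomization test of the sharp null next to a Neyman-style difference-in-means estimate with its conservative variance.

```python
import random
from statistics import mean, variance

def neyman_estimate(treated, control):
    # Difference-in-means ATE with the conservative Neyman variance
    # (sum of within-group sample variances divided by group sizes).
    ate = mean(treated) - mean(control)
    v = variance(treated) / len(treated) + variance(control) / len(control)
    return ate, v

def fisher_p_value(treated, control, n_perm=2000, seed=0):
    # Randomization test of the sharp null of no effect for any unit:
    # re-randomize the treatment labels and compare |diff in means|
    # against the observed statistic.
    rng = random.Random(seed)
    pooled = list(treated) + list(control)
    observed = abs(mean(treated) - mean(control))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        t, c = pooled[:len(treated)], pooled[len(treated):]
        if abs(mean(t) - mean(c)) >= observed:
            hits += 1
    return hits / n_perm

treated, control = [10.0, 11.0, 12.0, 13.0], [0.0, 1.0, 2.0, 3.0]
ate, var_hat = neyman_estimate(treated, control)   # estimation view
p = fisher_p_value(treated, control)               # testing view
```

(The full-enumeration randomization test would be exact; sampling permutations, as here, gives a Monte Carlo approximation.)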
36,859
In PCA, is there a systematic way of dropping variables to maximise the segregation of two populations?
Principal Components (PCs) are based on the variances of the predictor variables/features. There is no assurance that the most highly variable features will be those that are most highly related to your classification. That is one possible explanation for your results. Also, when you limit yourself to projections onto 2 PCs at a time as you do in your plots, you might be missing better separations that exist in higher-dimensional patterns. As you are already incorporating your predictors as linear combinations in your PC plots, you might consider setting this up as a logistic or multinomial regression model. With only 2 classes (e.g., "Aurignacian" versus "Gravettian"), a logistic regression describes the probability of class membership as a function of linear combinations of the predictor variables. A multinomial regression generalizes to more than one class. These approaches provide important flexibility with respect both to the outcome/classification variable and to the predictors. In terms of the classification outcome, you model the probability of class membership rather than making an irrevocable all-or-none choice in the model itself. Thus you can for example allow for different weights for different types of classification errors based on the same logistic/multinomial model. Particularly when you start removing predictor variables from a model (as you were doing in your examples), there is a danger that the final model will become too dependent on the particular data sample at hand. In terms of predictor variables in logistic or multinomial regression, you can use standard penalization methods like LASSO or ridge regression to potentially improve the performance of your model on new data samples. A ridge-regression logistic or multinomial model is close to what you seem to be trying to accomplish in your examples. 
It is fundamentally based on principal components of the feature set, but it weights the PCs in terms of their relations to the classifications rather than by the fractions of feature-set variance that they include.
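As a toy illustration of the penalized-logistic idea (a from-scratch gradient-descent sketch, not any particular package's implementation; all names and hyperparameters are mine): a ridge penalty shrinks the coefficients toward zero while keeping every predictor in the model.

```python
import math

def ridge_logistic(xs, ys, lam=0.1, lr=0.1, epochs=2000):
    # Gradient descent on negative log-likelihood plus a ridge
    # penalty lam * ||w||^2; the intercept b is left unpenalized.
    n, d = len(xs), len(xs[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [2.0 * lam * wj for wj in w], 0.0
        for x, y in zip(xs, ys):
            z = sum(wj * xj for wj, xj in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(d):
                gw[j] += (p - y) * x[j]
            gb += p - y
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# One predictor, two classes: the fitted model outputs class
# probabilities rather than an all-or-none classification.
w, b = ridge_logistic([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
```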
36,860
Similarity measures for more than 2 variables
This answer will draw heavily on the ecological literature, where Jaccard and other (dis)similarity measures are commonly used to quantify the compositional (dis)similarity between species assemblages at different sites. The single best reference is Baselga (2013) Multiple site dissimilarity quantifies compositional heterogeneity among several sites, while average pairwise dissimilarity may be misleading, which is freely available here. Basically, there are several approaches to quantifying higher-order dissimilarities (higher-order than pairwise). One is to average the pairwise dissimilarities for all pairs in the sample. This metric generally performs poorly for a variety of reasons, detailed in Baselga (2013). Another possibility is to find the average distance from an observation to the multivariate centroid. There is an explicit generalization of the Sorensen index to more than two observations. Recall that the Sorensen index is $\frac{2ab}{a+b}$ where a is the number of species (ones in your case) in sample A, b is the number of species in sample B, and ab is the number of species shared by samples A and B (i.e. the dot product). The three-site generalization, formulated by Diserud and Odegaard (2007) and discussed by Chao et al (2012) is $\frac{3}{2}\frac{ab+ac+bc-abc}{a+b+c}$. Consult Diserud and Odegaard (2007) for the motivation behind this metric as well as extensions to $N>3$. The references in Baselga (2013) will also point you to a multi-site generalization of the Simpson index, as well as R packages to compute the multi-site Sorensen and Simpson metrics. Some researchers have also found it useful to examine the average number of species shared by $i$ sites, where $i$ ranges from $2$ to $N$. This reveals some interesting scaling properties and unites a variety of concepts for different values of $i$. The key reference here is Hui and McGeoch (2014) available for free here. This paper also has an associated R package called 'zetadiv'.
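Following the formulas quoted above, a direct translation for binary presence/absence vectors (variable names are mine):

```python
def sorensen_pair(A, B):
    # Pairwise Sorensen index: 2*ab / (a + b), with ab the dot product.
    a, b = sum(A), sum(B)
    ab = sum(x & y for x, y in zip(A, B))
    return 2 * ab / (a + b)

def sorensen_three(A, B, C):
    # Diserud and Odegaard's three-site generalization:
    # (3/2) * (ab + ac + bc - abc) / (a + b + c).
    a, b, c = sum(A), sum(B), sum(C)
    ab = sum(x & y for x, y in zip(A, B))
    ac = sum(x & y for x, y in zip(A, C))
    bc = sum(x & y for x, y in zip(B, C))
    abc = sum(x & y & z for x, y, z in zip(A, B, C))
    return 1.5 * (ab + ac + bc - abc) / (a + b + c)
```

Both versions equal 1 for identical sites and 0 for fully disjoint ones, so the three-site index behaves like a genuine extension of the pairwise index.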
36,861
Similarity measures for more than 2 variables
A possible direction is to represent the problem as a graph. The variables will be the nodes, and the edges will be strong enough pairwise correlations. You can define many correlation measures based on the graph, and you should find the one that suits your needs best. In most common graph correlations, the denser the graph, the more correlated it is. Possible measures might be: The number (or ratio) of nodes (variables) with edges (i.e., a strong correlation with some other variable). It is easy to compute and tells you how many variables are correlated with at least one other variable. A variation on this metric is counting only nodes with at least a few edges. The number of edges. This metric gives higher weight to the number of correlations but might give a high score to a graph with many unconnected variables. The number (or ratio) of connected components. This measure is a bit more complex, but it better captures our common definition of similarity: if a variable is connected to a group of variables, they tend to behave as a single unit, regardless of the number of variables. You can use dbscan in order to get the graph structure, but other graph algorithms might fit your needs as well.
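The connected-components metric can be computed with a few lines of union-find (a sketch; node indices stand for variables, and edges for pairs whose correlation exceeds your chosen threshold):

```python
def count_components(n, edges):
    # Union-find: each strong pairwise correlation merges two groups.
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)})

# 5 variables; 0-1-2 mutually linked, 3 and 4 uncorrelated -> 3 components
print(count_components(5, [(0, 1), (1, 2)]))  # -> 3
```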
36,862
Similarity measures for more than 2 variables
Similarity is always between two items. It can then be extended to more (say three) items by first creating a single representation for two of the items and then finding its similarity with the third item. So, when you have 1000 items, you can ask how similar (or how far) any given vector is from a representation of those 1000 vectors. This representation can be the mean of the 1000 vectors, or it could be anything else, depending on how you define it.
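A minimal sketch of that idea, using the mean vector as the group representation and cosine similarity as the pairwise measure (both choices are just one way to "define it", as the answer says; the names are mine):

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_to_group(x, vectors):
    # Represent the group by its mean vector, then compare x to it.
    centroid = [sum(col) / len(vectors) for col in zip(*vectors)]
    return cosine(x, centroid)
```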
36,863
Suspiciously high shrinkage in random effects logistic regression
I suspect that the answer here has to do with the definition of "effective sample size". A rule of thumb (from Harrell's Regression Modeling Strategies book) is that the effective sample size for a Bernoulli variable is the minimum of the number of successes and failures; e.g. a sample of size 10,000 with only 4 successes is more like having $n=4$ than $n=10^4$. The effective sample sizes here are not tiny, but they're a lot smaller than the number of observations. Effective sample sizes per group:

summary(with(SimData, tapply(Res, list(ID), function(x) min(sum(x==0), sum(x==1)))))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   4.00   11.00   16.00   21.63   29.00   55.00

Sample sizes per group:

summary(c(table(SimData$ID)))
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   83.0   172.5   199.0   243.8   295.0   528.0

One way to test this explanation would be to do an analogous example with continuously varying (Gamma or Gaussian) responses.
36,864
Is feature selection with dummy coding of categorical variables problematic? [duplicate]
(I'm writing this here just to be sure the question isn't left "unanswered".) Yes, it can be a problem if we run the lasso on a design matrix with dummy-variable coding. Perhaps only some levels will be selected by the model. Like you mention, this makes the coding we choose a "tuning parameter" of the model, something that will change our estimate and that the user has to specify. This alone is undesirable, but it's also undesirable from a practical standpoint: if any levels of a factor are in the model, we will have to measure the factor, but then we only get to use its value when it happens to fall in the selected levels! This is especially problematic when the factor is expensive to measure.
36,865
Is there a symbol for the median of a population?
I have seen a number of symbols used to denote the population median. One I have seen quite a few times is $\tilde\mu$, but as whuber suggests in comments, you should define whatever you do use. So if you were to use this suggestion, you could say something like: $H_0: \ \tilde{\mu}_{a} = \tilde{\mu}_{b}$ $H_a: \ \tilde{\mu}_{a} \neq \tilde{\mu}_{b}$ $\text{where }\tilde{\mu}\text{ denotes the population median.}$ It would be okay to use $m$ for that, just as you have it in your question (as long as you define it) -- though keep in mind that conventionally population quantities are denoted by Greek symbols, which is probably why $\tilde\mu$ tends to crop up. You'd also need to be clear (somewhere) about the meaning of the subscripts $_a$ and $_b$. (Note also the use of $\neq$ in preference to $<>$.)
36,866
Is there a symbol for the median of a population?
I am currently in an introduction to statistics class, and the textbook says that the median of a statistic can be represented with an "M" (a capital m), while the median of a parameter can be represented with a "θ" (a theta). This is as straightforward an answer as you are going to get, though I am sure the way you handled the situation would work just fine also.
36,867
Is there a symbol for the median of a population?
The Greek letter Eta (η) can also be used to represent the population median. Check this from Wikipedia: In Statistics, η² is the "partial regression coefficient". η is the symbol for the linear predictor of a generalized linear model, and can also be used to denote the median of a population, or the thresholding parameter in Sparse Partial Least Squares regression.
36,868
Is there a symbol for the median of a population?
You can refer to the median as $Q(a,50\%)$, i.e. the 50th percentile, and write the null hypothesis as: $$H_0: Q(a,50\%)=Q(b,50\%)$$ This notation is longer than the other notations here, but its meaning is probably more obvious. You would probably still want to clarify that $Q$ is the quantile function, though readers might recognize the reference to the median automatically.
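To make the notation concrete, here is a small Python check (toy data of my own choosing, not from the question) that the 50th percentile returned by a quantile function coincides with the median:

```python
import statistics

# Toy data (my own example): Q(a, 50%) should equal the median of a.
a = [3, 1, 4, 1, 5, 9, 2, 6]

# statistics.quantiles with n=4 returns the three quartiles;
# the middle one is the 50th percentile, i.e. Q(a, 50%).
q25, q50, q75 = statistics.quantiles(a, n=4, method="inclusive")

print(q50, statistics.median(a))  # both are 3.5
```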
36,869
As sample size increases, why does the standard deviation of results get smaller? Can someone please provide a layman's example and explain why?
As sample size increases (for example, a trading strategy with an 80% edge), why does the standard deviation of results get smaller? The key concept here is "results." What are these results? The results are the variances of estimators of population parameters such as the mean $\mu$. For instance, if you're measuring the sample variance $s^2_j$ of values $x_{i_j}$ in your sample $j$, it doesn't get any smaller with larger sample size $n_j$: $$s^2_j=\frac 1 {n_j-1}\sum_{i_j} (x_{i_j}-\bar x_j)^2$$ where $\bar x_j=\frac{1}{n_j}\sum_{i_j}x_{i_j}$ is the sample mean. However, the estimator of the variance $s^2_\mu$ of the sample mean $\bar x_j$ will decrease with the sample size: $$\frac{1}{n_j}s^2_j$$ The layman's explanation goes like this. Suppose the whole population size is $n$. If we looked at every value $x_i$, $i=1,\dots,n$, our sample mean would be equal to the true mean: $\bar x_j=\mu$. In other words, the uncertainty would be zero, and the variance of the estimator would be zero too: $s^2_j=0$. However, when you're looking only at a sample of size $n_j$, you calculate the sample mean estimator $\bar x_j$ with uncertainty $s^2_j>0$. So, somewhere between sample size $n_j$ and $n$, the uncertainty (variance) of the sample mean $\bar x_j$ decreases from non-zero to zero. That's the simplest explanation I can come up with.
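The shrinking variance of the sample mean is easy to see empirically. A minimal sketch (my own simulation, assuming Uniform(0,1) data): draw many samples of each size and look at how spread out the sample means are.

```python
import random
import statistics

random.seed(0)

def sd_of_sample_mean(n, trials=2000):
    """Draw `trials` samples of size n from Uniform(0,1) and return the
    standard deviation of their sample means."""
    means = [statistics.fmean(random.random() for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

sd_small = sd_of_sample_mean(10)
sd_large = sd_of_sample_mean(1000)

# Theory predicts sd ~ sigma / sqrt(n), so a 100x larger sample
# should shrink the standard deviation of the mean by about 10x.
print(sd_small, sd_large)
```

Note that the spread of the raw data does not shrink; only the spread of the *estimator* of the mean does, matching the $\frac{1}{n_j}s^2_j$ formula above.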
36,870
As sample size increases, why does the standard deviation of results get smaller? Can someone please provide a laymen example and explain why
Maybe the easiest way to think about it is with regard to the difference between a population and a sample. If I ask you what the mean of a variable is in your sample, you don't give me an estimate, do you? You just calculate it and tell me, because, by definition, you have all the data that comprises the sample and can therefore directly observe the statistic of interest. Correlation coefficients are no different in this sense: if I ask you what the correlation is between X and Y in your sample, and I clearly don't care about what it is outside the sample and in the larger population (real or metaphysical) from which it's drawn, then you just crunch the numbers and tell me, no probability theory involved. Now, what if we do care about the correlation between these two variables outside the sample, i.e. in either some unobserved population or in the unobservable and in some sense constant causal dynamics of reality? (If we're conceiving of it as the latter then the population is a "superpopulation"; see for example https://www.jstor.org/stable/2529429.) Then of course we do significance tests and otherwise use what we know, in the sample, to estimate what we don't, in the population, including the population's standard deviation, which starts to get at your question. But first let's think about it from the other extreme, where we gather a sample that's so large that it simply becomes the population. Imagine census data if the research question is about the country's entire real population, or perhaps it's a general scientific theory and we have an infinite "sample": then, again, if I want to know how the world works, I leverage my omnipotence and just calculate, rather than merely estimate, my statistic of interest. What if I then have a brainfart and am no longer omnipotent, but am still close to it, so that I am missing one observation, and my sample is now one observation short of capturing the entire population?
Now I need to make estimates again, with a range of values that it could take with varying probabilities - I can no longer pinpoint it - but the thing I'm estimating is still, in reality, a single number - a point on the number line, not a range - and I still have tons of data, so I can say with 95% confidence that the true statistic of interest lies somewhere within some very tiny range. It all depends of course on what the value(s) of that last observation happen to be, but it's just one observation, so it would need to be crazily out of the ordinary in order to change my statistic of interest much, which, of course, is unlikely and reflected in my narrow confidence interval. The other side of this coin tells the same story: the mountain of data that I do have could, by sheer coincidence, be leading me to calculate sample statistics that are very different from what I would calculate if I could just augment that data with the observation(s) I'm missing, but the odds of having drawn such a misleading, biased sample purely by chance are really, really low. That's basically what I am accounting for and communicating when I report my very narrow confidence interval for where the population statistic of interest really lies. Now if we walk backwards from there, of course, the confidence starts to decrease, and thus the interval of plausible population values - no matter where that interval lies on the number line - starts to widen. My sample is still deterministic as always, and I can calculate sample means and correlations, and I can treat those statistics as if they are claims about what I would be calculating if I had complete data on the population, but the smaller the sample, the more skeptical I need to be about those claims, and the more credence I need to give to the possibility that what I would really see in population data would be way off what I see in this sample. 
So all this is to sort of answer your question in reverse: our estimates of any out-of-sample statistics get more confident and converge on a single point, representing certain knowledge with complete data, for the same reason that they become less certain and range more widely the less data we have. It's also important to understand that the standard deviation of a statistic specifically refers to and quantifies the probabilities of getting different sample statistics in different samples all randomly drawn from the same population, which, again, itself has just one true value for that statistic of interest. There is no standard deviation of that statistic at all in the population itself - it's a constant number and doesn't vary. A variable, on the other hand, has a standard deviation all its own, both in the population and in any given sample, and then there's the estimate of that population standard deviation that you can make given the known standard deviation of that variable within a given sample of a given size. So it's important to keep all the references straight, when you can have a standard deviation (or rather, a standard error) around a point estimate of a population variable's standard deviation, based off the standard deviation of that variable in your sample. There's just no simpler way to talk about it. And lastly, note that, yes, it is certainly possible for a sample to give you a biased representation of the variances in the population, so, while it's relatively unlikely, it is always possible that a smaller sample will not just lie to you about the population statistic of interest but also lie to you about how much you should expect that statistic of interest to vary from sample to sample. There's no way around that. Think of it like if someone makes a claim and then you ask them if they're lying. Maybe they say yes, in which case you can be sure that they're not telling you anything worth considering. 
But if they say no, you're kinda back at square one. Either they're lying or they're not, and if you have no one else to ask, you just have to choose whether or not to believe them. (Bayesians seem to think they have some better way to make that decision but I humbly disagree.)
36,871
Total Sum of Squares, Covariance between residuals and the predicted values
I'm going to assume this is all in the context of a linear model $Y = X\beta + \varepsilon$. Letting $H = X(X^T X)^{-1}X^T$, we have fitted values $\hat Y = H Y$ and residuals $e = Y - \hat Y = (I - H)Y$. For the second term in your expression, $$ \sum_i (y_i - \hat y_i)(\hat y_i - \bar y) = \langle e, HY - \bar y \mathbb 1\rangle $$ (where $\mathbb 1$ is the vector of all $1$'s and $\langle ., .\rangle$ is the standard inner product) $$ = \langle (I-H)Y, HY - \bar y \mathbb 1\rangle = Y^T (I-H)HY - \bar y Y^T (I-H) \mathbb 1. $$ Assuming we have an intercept in our model, $\mathbb 1$ is in the span of the columns of $X$ so $(I-H)\mathbb 1 = 0$. We also know that $H$ is idempotent so $(I-H)H = H-H^2 = H-H = 0$ therefore $\sum_i (y_i - \hat y_i)(\hat y_i - \bar y) = 0$. This tells us that the residuals are necessarily uncorrelated with the fitted values. This makes sense because the fitted values are the projection of $Y$ into the column space, while the residuals are the projection of $Y$ into the space orthogonal to the column space of $X$. These two vectors are necessarily orthogonal, i.e. uncorrelated. By showing that, under this model, $\sum_i (y_i - \hat y_i)(\hat y_i - \bar y) = 0$, we have proved that $$ \sum_i(y_i - \bar y)^2 = \sum_i(y_i - \hat y_i)^2 + \sum_i(\hat y_i - \bar y)^2 $$ which is a well-known decomposition. To answer your question about why correlation between $e$ and $\hat Y$ means there are better values possible, I think you really need to consider the geometric picture of linear regression as shown below, for example: (taken from random_guy's answer here). If we have two centered vectors $a$ and $b$, the (sample) correlation between them is $$ cor(a, b) = \frac{\sum_i a_ib_i}{\sqrt{\sum_i a_i^2 \sum b_i^2}} = \cos \theta $$ where $\theta$ is the angle between them. If this is new to you, you can read more about it here. Linear regression by definition seeks to minimize $\sum_i e_i^2$. 
Looking at the picture, we can see that this is the squared length of the vector $\hat \varepsilon$, and we know that this length will be the shortest when the angle between $\hat \varepsilon$ and $\hat Y$ is $90^\circ$ (if that's not clear, imagine moving the point given by the tip of the vector $\hat Y$ in the picture and see what happens to the length of $\hat \varepsilon$). Since $\cos 90^\circ = 0$ these two vectors are uncorrelated. If this angle is not $90^\circ$, i.e. $\sum_i e_i \hat y_i \neq 0 \implies \cos \theta \neq 0$, then we don't have the $\hat Y$ that's as close as possible. To answer your question about how the term $\sum_i (y_i - \hat y_i)(\hat y_i - \bar y)$ is a covariance, you need to remember that this is a sample covariance, not the covariance between random variables. As I showed above, that's always 0. Note that $$ \sum_i (y_i - \hat y_i)(\hat y_i - \bar y) = \sum_i ([y_i - \hat y_i] - 0)([\hat y_i] - \bar y). $$ Noting that the sample average of $y_i - \hat y_i = 0$, and the sample average of $\hat y_i = \bar y$, we have that this is a sample covariance by definition.
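As a sanity check, the orthogonality $\sum_i e_i(\hat y_i - \bar y)=0$ and the resulting sum-of-squares decomposition can be verified numerically for simple linear regression (the data below are made-up illustration values, not from the question):

```python
# Made-up illustration data
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# OLS slope and intercept from the normal equations (model includes an intercept)
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar

fitted = [a + b * x for x in xs]
resid = [y - f for y, f in zip(ys, fitted)]

# Cross term: should vanish, so SST = SSE + SSR
cross = sum(e * (f - ybar) for e, f in zip(resid, fitted))
sst = sum((y - ybar) ** 2 for y in ys)
sse = sum(e ** 2 for e in resid)
ssr = sum((f - ybar) ** 2 for f in fitted)

print(abs(cross) < 1e-9, abs(sst - (sse + ssr)) < 1e-9)  # True True
```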
36,872
What is Bayesian melding?
This question certainly deserves an answer, so I will do my best and hope that others will improve this answer. I have been reading the paper by Raftery and Poole which introduced the technique. A variety of applications can be found by searching for "Bayesian melding" but most of them just seem to repeat the notation and content of Raftery and Poole in their methodology sections. The situation is that there is a parameter $\theta$, possibly a vector, another parameter $\phi$, possibly a vector, and a (deterministic) function $M$ such that $\phi = M(\theta)$. In the original example, $\theta=(P_0, MSY)$ where $P_0$ is the size of a whale population and $MSY$ is the maximum sustainable yield (number of whales that can be hunted) and $\phi = P_{1993}$ the population of whales $1993$ years after time zero. The function $M$ is given by a complicated difference equation, iterated $1993$ times. The researcher has some prior information about $\theta$, given by a probability distribution $q_1(\theta)$, and some prior information about $\phi$, given by a probability distribution $q_2(\phi)$. It is desired to make inferences about $\theta$ and $\phi$, given these prior distributions and the function $M$. Unfortunately, $\phi = M(\theta)$ completely determines the distribution of $\phi$, so there is no way to use the information about $\phi$ given by the distribution $q_2(\phi)$ to conclude anything about $\phi$. The simpler approach is to ignore the prior $q_2(\phi)$ and just make inferences about $\phi$ using the distribution $q_1^*(\phi)$ induced from the fact that $\phi = M(\theta)$ and $\theta \sim q_1(\theta)$. The Bayesian melding approach is to replace the prior on $\phi$ by $$q^{[\phi]}(\phi) \propto q_2(\phi)^{1/2}q_1^*(\phi)^{1/2}$$ which is then also used to get a distribution on $\theta$.
The idea seems to be that this distribution somehow combines the prior knowledge about $\phi$ with the fact that $\phi= M(\theta)$, even though these two facts are mathematically incompatible unless $q_2(\phi) = q_1^*(\phi)$. The evidence given in Section 5 of the Raftery and Poole paper seems to be simulating some values of $P_0, MSY$ and $P_{1993}$ for the whales, then adding some noise to $P_{1993}$ and concluding that Bayesian melding gives a better estimate of $P_{1993}$ than the simpler approach, which is not very surprising, since it seems that the added noise is just literally sampled from $q_2(P_{1993})$ in this case. (However, I may well have misunderstood this, and the fact that Bayesian melding seems to be somewhat popular suggests that there is more going on here than I have understood.) Edit: from the point of view of a statistician, the more natural thing to do would be to make the function $M$ stochastic in some way, for example by adding noise. After some further thought, it seems that the method can be thought of in this way. Assume that either $\phi = M(\theta)$ (with probability $\alpha$) or $\phi$ is independent of $\theta$ (with probability $1-\alpha$). Then it follows that the distribution of $\phi$ is a mixture $$\alpha q_1^*(\phi) + (1-\alpha) q_2(\phi)$$ and if $q_2(\phi)$ is approximately equal to $q_1^*(\phi)$, which ought to be the case unless the model $M$ is obviously wrong, then this is approximately equal to $$q_2(\phi)^{1-\alpha}q_1^*(\phi)^{\alpha}$$ because $(1+x)^\alpha \approx 1+\alpha x$ when $x \approx 0$.
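A minimal numerical sketch of the pooling step (the two densities and the grid are my own toy choices, not from Raftery and Poole): geometric pooling of an induced prior $q_1^*(\phi)$ and a direct prior $q_2(\phi)$, both taken to be normal here for convenience.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

step = 0.01
grid = [i * step for i in range(-500, 501)]

q1_star = [normal_pdf(x, 0.0, 1.0) for x in grid]  # stand-in for the induced prior on phi
q2      = [normal_pdf(x, 0.5, 1.5) for x in grid]  # stand-in for the direct prior on phi

alpha = 0.5  # the pooling weight; 1/2 gives the q_2^{1/2} q_1*^{1/2} pool above
pooled = [(p1 ** alpha) * (p2 ** (1 - alpha)) for p1, p2 in zip(q1_star, q2)]

# The geometric pool is only defined up to proportionality, so renormalize on the grid.
z = sum(pooled) * step
pooled = [p / z for p in pooled]

mode = grid[max(range(len(grid)), key=lambda i: pooled[i])]
print(mode)  # lies between the two prior modes, pulled toward the tighter prior
```

For two normals the geometric pool is again normal, with precision the weighted average of the two precisions, which is why the mode lands between 0.0 and 0.5.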
What is Bayesian melding?
This question certainly deserves an answer, so I will do my best and hope that others will improve this answer. I have been reading the paper by Raftery and Poole which introduced the technique. A var
What is Bayesian melding? This question certainly deserves an answer, so I will do my best and hope that others will improve this answer. I have been reading the paper by Raftery and Poole which introduced the technique. A variety of applications can be found by searching for "Bayesian melding" but most of them just seem to repeat the notation and content of Raftery and Poole in their methodology sections. The situation is that there is a parameter $\theta$, possibly a vector, another parameter $\phi$, possibly a vector, and a (deterministic) function $M$ such that $\phi = M(\theta)$. In the original example, $\theta=(P_0, MSY)$ where $P_0$ is the size of a whale population and $MSY$ is the maximum sustainable yield (number of whales that can be hunted) and $\phi = P_{1993}$ the population of whales $1993$ years after time zero. The function $M$ is given by a complicated difference equation, iterated $1993$ times. The researcher has some prior information about $\theta$, given by a probability distribution $q_1(\theta)$, and some prior information about $\phi$, given by a probability distribution $q_2(\phi)$. It is desried to make inferences about $\theta$ and $\phi$, given these prior distributions and the function $M$. Unfortunately, the fact that $\phi = M(\theta)$ completely determines the distribution of $\phi$, so there is no way to use the information about $\phi$ given by the distribution $q_2(\phi)$ to conclude anything about $\phi$. The simpler approach is to ignore the prior $q_2(\phi)$ and just make inferences about $\phi$ using the distribution $q_1^*(\phi)$ induced from the fact that $\phi = M(\theta)$ and $\theta \sim q_1(\theta)$. The Bayesian melding approach is to replace the prior on $\phi$ by $$q^{[\phi]}(\phi) \propto q_2(\phi)^{1/2}q_1^*(\phi)^{1/2}$$ which is then also used to get a distribution on $\theta$. 
The idea seems to be that this distribution somehow combines the prior knowledge about $\phi$ with the fact that $\phi= M(\theta)$, even though these two facts are mathematically incompatible unless $q_2(\phi) = q_1^*(\phi)$. The evidence given in Section 5 of the Raftery and Poole paper seems to be simulating some values of $P_0, MSY$ and $P_{1993}$ for the whales, then adding some noise to $P_{1993}$ and concluding that Bayesian melding gives a better estimate of $P_{1993}$ than the simpler approach, which is not very surprising, since it seems that the added noise is just literally sampled from $q_2(P_{1993})$ in this case. (However, I may well have misunderstood this, and the fact that Bayesian melding seems to be somehat popular suggests that there is more going on here than I have understood.) Edit: from the point of view of a statistician, the more natural thing to do would be to make the function $M$ stochastic in some way, for example by adding noise. After some further thought, it seems that the method can be thought of in this way. Assume that either $\phi = M(\theta)$ (with probability $\alpha$) or $\phi$ is independent of $\theta$ (with probability $1-\alpha$). Then it follows that the distribution of $\phi$ is a mixture $$\alpha q_1^*(\phi) + (1-\alpha) q_2(\phi)$$ and if $q_2(\phi)$ is approximately equal to $q_1^*(\phi)$, which ought to be the case unless the model $M$ is obviously wrong, then this is approximately equal to $$q_2(\phi)^{1-\alpha}q_1^*(\phi)^{\alpha}$$ because $(1+x)^\alpha \approx 1+\alpha x$ when $x \approx 0$.
36,873
Is there a parametric joint distribution such that $X$ and $Y$ are both uniform and $\mathbb{E}[Y \;|\; X]$ is linear?
We can develop rich parametric families from the trivial solution with copula $F(x,y) = \min(x,y)$, the case of perfect (positive) correlation, and its counterpart for perfect negative correlation. Concentrating the probability instead along the line segment connecting $(0,\alpha)$ to $(1,\beta)$ with $\beta\gt \alpha$ gives the copula $$F(x,y;\alpha,\beta) = \cases{\matrix{x y,&0\le y \lt \alpha\text{ or }\beta \lt y \le 1 \\ \beta x,&x(\beta-\alpha)\le y-\alpha \\ \alpha x + y-\alpha&\text{otherwise.}}}$$ A similar copula arises when $\beta \lt \alpha$, which I will also designate $F(x,y;\alpha,\beta)$.

Think of these as mixtures: when $\beta \gt \alpha$, there are uniform components on the horizontal rectangles $[0,1]\times [0,\alpha]$ and $[0,1]\times[\beta,1]$, and on the central rectangle $[0,1]\times[\alpha,\beta]$ there is a perfect correlation (whose distribution is that of $(U, \alpha+(\beta-\alpha)U)$ for a uniformly distributed variable $U$). This conception of $F$ makes it easy to compute the regression: it's a weighted sum of the three conditional means, $$\mathbb{E}(Y\mid X) = \alpha\left(\frac{\alpha}{2}\right) + (\beta-\alpha)\left(\alpha + (\beta-\alpha)X\right) + (1-\beta)\left(\frac{1+\beta}{2}\right).$$ This evidently is linear in $X$: the slope is $(\beta-\alpha)^2$ times the sign of $\beta-\alpha$, and collecting the constant terms shows the intercept equals $(1-\text{slope})/2$, which is $(1-(\beta-\alpha)^2)/2$ here since $\beta \gt \alpha$. (The line is symmetric about $(1/2,1/2)$, as it must be for the marginal of $Y$ to be uniform; in the degenerate case $\alpha=0,\beta=1$ it reduces to $\mathbb{E}(Y\mid X)=X$, as perfect correlation requires.) Moreover, it has been constructed to have uniform marginals.

To create a parametric family, choose any parametric distribution for $(\alpha,\beta)$ with parameter $\theta$. Let $G(\alpha,\beta;\theta)$ be the distribution function. It describes a mixture of the $F(;\alpha,\beta)$ via integration: $$\tilde F(x,y;\theta) = \iint F(x,y;\alpha,\beta)dG(\alpha,\beta;\theta)$$ is the distribution function (copula). Because each $F(;\alpha,\beta)$ has uniform marginals, so does $\tilde F(;\theta)$.
Moreover, its regression is linear because $$\eqalign{ \mathbb{E}_{\tilde F(;\theta)}(Y\mid X) &= \iint \mathbb{E}_{F(;\alpha,\beta)}(Y\mid X)\,dG(\alpha,\beta;\theta)\\ &=\iint \left(\frac{1-\operatorname{sgn}(\beta-\alpha)(\beta-\alpha)^2}{2} + \operatorname{sgn}(\beta-\alpha)(\beta-\alpha)^2 X\right)dG(\alpha,\beta;\theta) \\ &= \iint \frac{1-\operatorname{sgn}(\beta-\alpha)(\beta-\alpha)^2}{2}\, dG(\alpha,\beta;\theta) + \iint \operatorname{sgn}(\beta-\alpha)(\beta-\alpha)^2\, dG(\alpha,\beta;\theta)\,X\\ &= \mathbb{E}_{G(;\theta)}\left(\frac{1-\operatorname{sgn}(\beta-\alpha)(\beta-\alpha)^2}{2}\right) + \mathbb{E}_{G(;\theta)}(\operatorname{sgn}(\beta-\alpha)(\beta-\alpha)^2)X. }$$ This shows how the intercept and slope are the expectations of the intercept and slope (with respect to $G$), providing useful information for selecting appropriate families $G(;\theta)$.

These graphics document a simulation from one such family. Here, $\alpha$ was drawn from a Beta$(5,1)$ distribution and $\beta$ was drawn independently from a Beta$(3,10)$ distribution. The first column shows histograms of the realizations of these parameters. The second column shows histograms of the marginal distributions of $X$ and $Y$: they are satisfactorily close to uniform. The rightmost column shows a random subset of the 100,000 simulated values, along with an estimate of its regression (the red line) and an approximation to the theoretical regression (black dotted line): they agree closely. The estimated regression was obtained by computing the means of $X$ and $Y$ within windows of $X$, then smoothing their trace with Loess. (The "theoretical" regression line is only an approximation obtained by replacing $\alpha$ and $\beta$ in the expectation formulas by their expectations. Exact formulas are straightforward to work out in this case, but are long and messy to code.)

The R code that produced this figure can readily be used to study other families $G(;\theta)$.

#
# Draw `n` variates from the mixture copula.
# `alpha` and `beta` are intended to be realizations of G(;theta).
#
runif.xy <- function(n, alpha=0, beta=1) {
  a <- pmin(alpha, beta)
  b <- pmax(alpha, beta)
  xy <- matrix(runif(2*n), nrow=2)               # Start with a uniform distribution
  i <- xy[2,] > a & xy[2,] < b                   # Select the middle rectangle
  xy[2, i] <- (xy[1,]*(beta - alpha) + alpha)[i] # Create perfect correlation
  return(xy)
}
#
# Specify the parameters ("theta").
#
a.alpha <- 5
b.alpha <- 1
a.beta <- 3
b.beta <- 10
#
# Draw the copula parameters `alpha` and `beta` from G(;theta).
#
n.sim <- 1e5
alpha <- rbeta(n.sim, a.alpha, b.alpha)
beta <- rbeta(n.sim, a.beta, b.beta)
#
# Draw (X,Y) from the mixture.
#
sim <- runif.xy(n.sim, alpha, beta)
#
# Plot histograms of alpha, beta, X, Y.
#
par(mfcol=c(2,3))
hist(alpha); abline(v=a.alpha/(a.alpha+b.alpha), col="Red", lwd=2)
hist(beta); abline(v=a.beta/(a.beta+b.beta), col="Red", lwd=2)
hist(sim[1,], main="X Marginal", xlab="X")
hist(sim[2,], main="Y Marginal", xlab="Y")
#
# Plot the simulation and its regression curve.
#
i <- sample.int(n.sim, min(5e3, n.sim)) # Limit how many points are shown
plot(t(sim[, i]), asp=1, pch=19, col="#00000002",
     main="Simulation", xlab="X", ylab="Y")
library(zoo)
i <- order(sim[1,])
x <- as.vector(rollapply(ts(sim[1, i]), ceiling(n.sim/100), mean))
y <- as.vector(rollapply(ts(sim[2, i]), ceiling(n.sim/100), mean))
lines(lowess(y ~ x), col="Red", lwd=2)
#
# Overplot the theoretical regression curve.
#
a <- a.alpha / (a.alpha + b.alpha) # Expectation of `alpha`
b <- a.beta / (a.beta + b.beta)    # Expectation of `beta`
slope <- (b - a)^2 * sign(b - a)
intercept <- (1 - slope)/2
abline(c(intercept, slope), lty=3, lwd=3)
36,874
Relating $f(\mathrm{Var}[X])$ to $\mathrm{Var}[f(X)]$ for Positive, Increasing, and Concave $f(X)$
There is no relation between the two quantities $f(\text{Var}[X])$ and $\text{Var}[f(X)]$ for concave $f$. Here are examples to demonstrate this:

Ex 1: Suppose the random variable $X$ has the pmf $p_X(0) = \frac{1}{2}$ and $p_X(4) = \frac{1}{2}$, and $f(x) = \sqrt{x}$. We get $\text{Var}[f(X)] = 1$ and $f(\text{Var}[X]) = f(4) = 2$. So, $\text{Var}[f(X)] < f(\text{Var}[X])$.

Ex 2: Suppose the random variable $X$ is the same as before, i.e. it has the pmf $p_X(0) = \frac{1}{2}$ and $p_X(4) = \frac{1}{2}$, but $f$ is changed to $f(x) = \sqrt{x} - 100$. Note that $\text{Var}[f(X)] = 1$ still, but now $f(\text{Var}[X]) = f(4) = 2-100=-98$. So, $\text{Var}[f(X)] > f(\text{Var}[X])$.
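Both counterexamples are small enough to verify by direct computation of the discrete variances:

```python
from math import sqrt

# X takes the values 0 and 4, each with probability 1/2.
xs, probs = [0, 4], [0.5, 0.5]

def var(values, probs):
    """Variance of a discrete random variable given values and probabilities."""
    m = sum(p * v for p, v in zip(probs, values))
    return sum(p * (v - m) ** 2 for p, v in zip(probs, values))

# Ex 1: f(x) = sqrt(x).  Var[f(X)] = 1 < f(Var[X]) = sqrt(4) = 2.
f1 = [sqrt(x) for x in xs]
print(var(f1, probs), sqrt(var(xs, probs)))        # 1.0 2.0

# Ex 2: f(x) = sqrt(x) - 100 (still concave and increasing on (0, inf);
# shifting does not change the variance).  Now Var[f(X)] = 1 > -98.
f2 = [sqrt(x) - 100 for x in xs]
print(var(f2, probs), sqrt(var(xs, probs)) - 100)  # 1.0 -98.0
```

The shift in Ex 2 changes $f(\text{Var}[X])$ but leaves $\text{Var}[f(X)]$ untouched, which is exactly why no general inequality can hold.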
36,875
How to use cross validation for model comparison
Are the previously mentioned steps follow any standard procedure?

Yes! You are using a hold-out validation set for final classifier comparison and k-fold cross-validation for parameter (model) selection.

If not, how can I use Repeated/Nested CV in my case?

Since you are considering different models, one way to improve that would be, for each method: use k-fold cross-validation for model selection; then, after selecting the optimal parameters (model fitting), use k-fold cross-validation to get the generalisation error. This gives you the variation in errors across folds, so you can calculate the variance (or standard deviation) to report on the reliability/consistency of the model, or even generate some plot.

UPDATE: You don't need to split the data for step 1 and step 2. Use the 10000 data points in k-fold cross-validation, i.e., if k = 10, then you will use 9000 for training and 1000 for validation for model selection. Once the model is selected, use the same 10000 samples in a similar k-fold cross-validation, but this time your parameters will be fixed. You can choose to run k-fold cross-validation once and get k error measures, one for each subset; 2*k if you also consider the training sets, which you could also look into. So, with those k or 2*k values you can perform some statistical tests or draw some plots. It is also good to repeat the cross-validation process n times, giving you n*k error measures for statistical analysis.
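The two-stage procedure above can be sketched with a toy classifier. This is an illustrative sketch only: the data, the threshold "model", and the candidate parameter grid are all made up for the example, and the dataset is scaled down from the 10000 points in the question.

```python
import random
random.seed(0)

# Hypothetical 1-D data: class 0 ~ N(0,1), class 1 ~ N(1,1).  The "model"
# is a threshold classifier whose single parameter is being selected.
data = [(random.gauss(0, 1), 0) for _ in range(100)] + \
       [(random.gauss(1, 1), 1) for _ in range(100)]
random.shuffle(data)

def kfold(data, k):
    """Yield (train, validation) splits for k-fold cross-validation."""
    n = len(data)
    for i in range(k):
        val = data[i * n // k:(i + 1) * n // k]
        train = data[:i * n // k] + data[(i + 1) * n // k:]
        yield train, val

def error(threshold, subset):
    return sum((x > threshold) != y for x, y in subset) / len(subset)

# Step 1: k-fold CV for model (parameter) selection.
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
cv_error = {t: sum(error(t, val) for _, val in kfold(data, 10)) / 10
            for t in candidates}
best = min(cv_error, key=cv_error.get)

# Step 2: with the selected parameter fixed, run k-fold CV again; the
# spread of the per-fold errors indicates the model's consistency.
fold_errors = [error(best, val) for _, val in kfold(data, 10)]
mean_err = sum(fold_errors) / len(fold_errors)
print(best, round(mean_err, 3))
```

The k per-fold errors from step 2 are the values whose variance or standard deviation would be reported.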
36,876
How to use cross validation for model comparison
I'd like to propose an argument that @discipulus answer is not entirely correct.

What is the standard procedure?

Normally, the setup looks like this:

Split the dataset (for example, training 60%, cross-validation 20%, test 20%).
[Cross-validation set] Find the best model (comparing different models and/or different hyperparameters for each). Model selection ends with this step.
[Test set] Get an estimate of how the model might perform in "the real world".

Caveats

If you don't need to compare models, and don't need to optimize hyperparameters for those models, you can skip step 2 and not allocate the cross-validation subset (20% in our case). If you don't need an estimate of the actual performance in the real world, you can skip step 3 and not allocate the test subset (20% in our case).

Do not choose the model based on the test set performance. Let's imagine that on cross-validation (step 2), model A (with some specific hyperparameters) gets 90% accuracy, and model B (with some specific hyperparameters) gets 80%. Now, let's say you got curious and ran both models on the test set (step 3), and the results are model A gets 80%, while model B gets 90% (the opposite of before). What to do? Use only the cross-validation results to select the model, i.e. the correct answer here is to use model A (related answer).

Why can't I choose based on the test set? Because you'd essentially be selecting some model out of many models, and you might get lucky and find the one which just so happens to perform well on the test set; therefore you will not be able to trust your test set accuracy anymore. A more detailed explanation is available here.

Applied to your example

You use step 2 and step 3 for exactly the same reason - selecting the best model and its hyperparameter combination.
You could have your setup like this:

Create a training set with 800 data points; keep 200 for cross-validation (no test set, since you didn't mention you want to evaluate an estimate of the "real world" performance). Use the cross-validation dataset with each of the classifiers to find the best hyper-parameters (such as the regularizer or the number of hidden nodes).

Let's say your results are:

Model1: LogisticRegression, regularizer=0.1, accuracy 80%
Model2: LogisticRegression, regularizer=0.01, accuracy 80%
Model3: LogisticRegression, regularizer=0.001, accuracy 81%
Model4: NeuralNetwork, hidden_nodes=5, accuracy 71%
Model5: NeuralNetwork, hidden_nodes=10, accuracy 82%
Model6: NeuralNetwork, hidden_nodes=25, accuracy 76%

That's it: NeuralNetwork is the better model than LogisticRegression.

What if I want to evaluate performance?

You can't use the 82% as your accuracy estimate "in the real world", because it now has an optimistic bias, due to you selecting for it. If you want an estimate of how your model performs, you need to add the 3rd step as described in the "standard procedure" section. In your setup it would look like this:

Create a training set with 600 data points; keep 200 for cross-validation, and also keep 200 for test.
[Same actions, same results as previously.]
Train the NeuralNetwork with 10 hidden nodes on 800 data points (training set + cross-validation set) and test on the 200 data points (test set).

Repeatability: How to use nested k-fold cross validation

Imagine you reshuffle your 1000 data points, then do step 2, and you get entirely different accuracies, and now the best model is LogisticRegression with regularizer=0.01. This is a problem, since just by shuffling the dataset we got a different outcome. One way to get a stable accuracy estimate would be to use k-fold cross validation for step 2 (exactly as you described in your original post). But we could do k-fold cross validation for step 3 as well, to get a better accuracy estimate.
It would be called nested k-fold cross validation and would go like this:

Use k-fold cross validation (for example, if k=5, then the 1000 data points are split into a `trainval` dataset with 800 data points and a `test` dataset with 200 data points).

FOR EACH of the 5 800+200 (trainval+test) datapoint splits {
    Take the `trainval` 800 datapoints and use k-fold cross validation
    (for example, if k=4, then the 800 datapoints are split into a `train`
    dataset with 600 data points and a `val` dataset with 200 data points).
    FOR EACH of the 4 600+200 (train+val) splits {
        Train a model with some specific hyperparameters on the 600 data
        points, then calculate accuracy on the 200-point `val` set.
    }
    Calculate accuracy of the best model+hyperparameter pair on the 200
    `test` datapoints.
}

You should have trained 3 (model+hyperparameter pairs) * 5 (outer cross-validation) * 4 (inner c-v) = 60 models.

More resources on Nested k-fold crossvalidation

There's an excellent blog post by Weina Jin, which includes a more detailed description and implementation pseudo-code. Nested k-fold cross validation can be visualized like this (image source): Pseudo-code is also available here, and here. Here and here are quick summaries of the nested k-fold cross validation. Here is a longer one. Here is a bit more information on when nested k-fold validation is useful.

Regarding the t-test statistical analysis

This question alone could warrant a separate post on the stack exchange, but this Article explains why it might not be the best idea and among the suggestions is to use McNemar’s test or 5×2 Cross-Validation instead.

We could then select and use the paired Student’s t-test to check if the difference in the mean accuracy between the two models is statistically significant, e.g. reject the null hypothesis that assumes that the two samples have the same distribution. [...] The problem is, a key assumption of the paired Student’s t-test has been violated. Namely, the observations in each sample are not independent.
As part of the k-fold cross-validation procedure, a given observation will be used in the training dataset (k-1) times. This means that the estimated skill scores are dependent, not independent, and in turn that the calculation of the t-statistic in the test will be misleadingly wrong along with any interpretations of the statistic and p-value.

Regarding reporting deviations and confidence intervals

This might not be the best choice either.

There appears to be some confusion among researchers, however, about best practices for cross-validation, and about the interpretation of cross-validation results. In particular, [...] standard deviations, confidence intervals, or an indication of ”significance”. In this paper, we argue that, under many practical circumstances, when the goal of the experiments is to see how well the model returned by a learner will perform in practice in a particular domain, repeated cross-validation is not useful, and the reporting of confidence intervals or significance is misleading.

Source: On Estimating Model Accuracy with Repeated Cross-Validation. Gitte Vanwinckelen, Hendrik Blockeel. Department of Computer Science, KU Leuven; Heverlee, Belgium.
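The nested k-fold pseudocode can be made runnable with a trivial stand-in "model". Everything here is illustrative: a threshold classifier on synthetic 1-D data replaces real models, three thresholds play the role of the three model+hyperparameter pairs, and the dataset is scaled down from the 1000 points in the question. The outer loop estimates generalisation accuracy; the inner loop performs model selection.

```python
import random
random.seed(42)

# Synthetic 1-D data: class 0 ~ N(0,1), class 1 ~ N(1,1).
data = [(random.gauss(0, 1), 0) for _ in range(100)] + \
       [(random.gauss(1, 1), 1) for _ in range(100)]
random.shuffle(data)

def folds(data, k):
    """Return k (train, held-out) splits of the data."""
    n = len(data)
    return [(data[:i*n//k] + data[(i+1)*n//k:], data[i*n//k:(i+1)*n//k])
            for i in range(k)]

def fit(train, threshold):
    # "Training" is a no-op for this toy threshold model.
    return threshold

def accuracy(model, subset):
    return sum((x > model) == y for x, y in subset) / len(subset)

hyperparams = [0.0, 0.5, 1.0]   # three model+hyperparameter pairs
outer_k, inner_k = 5, 4
outer_scores, models_trained = [], 0

for trainval, test in folds(data, outer_k):        # outer CV: error estimate
    inner_scores = {h: 0.0 for h in hyperparams}
    for train, val in folds(trainval, inner_k):    # inner CV: model selection
        for h in hyperparams:
            inner_scores[h] += accuracy(fit(train, h), val)
            models_trained += 1
    best = max(inner_scores, key=inner_scores.get)
    outer_scores.append(accuracy(fit(trainval, best), test))

# 3 pairs * 5 outer folds * 4 inner folds = 60 models trained in total.
print(models_trained, round(sum(outer_scores) / outer_k, 3))
```

The mean (and spread) of `outer_scores` is the nested-CV accuracy estimate, and the selection in the inner loop never sees the outer test fold.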
36,877
How to derive the asymptotic distribution of the test statistic of a large sample test for equality of two binomial populations?
The OP is correct that you just cannot "add" convergence in distribution, for this you have to be aware of the covariance structure, which is fortunately enough trivial in this case. update: Thanks for pointing out the earlier, crucial (and pretty bad) mistake. Hopefully the following answer is more or less correct... Suppose, for the moment, that $n_Y = \lceil c n_X \rceil$, for some $c > 0$. Define $$ \theta_X = \frac {\hat p_X - p} { \sqrt{ \left(\frac 1 {n_X} + \frac 1 {n_Y}\right) p(1-p)}} \quad \mbox{and} \quad \theta_Y = \frac {\hat p_Y - p} { \sqrt{ \left(\frac 1 {n_X} + \frac 1 {n_Y}\right) p(1-p)}},$$ so that $$T = \sqrt{\frac{p(1-p)}{\hat p(1 - \hat p)}} (\theta_X - \theta_Y).$$ Now $\theta_X$ converges in distribution to $N\left(0,\frac c {1+c} \right)$, whereas $\theta_Y$ converges in distribution to $N\left(0, \frac 1 {1 + c} \right)$, and jointly they converge to the independent product of these two distributions. It follows that $\theta_X - \theta_Y$ converges (using the continuous mapping theorem, hopefully correctly this time) to a $N(0,1)$ distribution. Since $\hat p(1-\hat p) \rightarrow p(1-p)$ almost surely, it follows that $T \stackrel{d}{\rightarrow} N(0,1)$. In a similar way, when e.g. $n_Y = n_X^2$ you can find that $\theta_X \stackrel{d}{\rightarrow} N(0,1)$ and $\theta_Y \stackrel{a.s.}{\rightarrow} 0$, so that again $T \stackrel{d}{\rightarrow} N(0,1)$. Unfortunately I don't see how to avoid making some assumption on the relative growth of $n_X$ and $n_Y$, but perhaps a more general argument is possible.
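The limiting $N(0,1)$ distribution of $T$ can be checked numerically. A quick Monte Carlo sketch (the sample sizes, true $p$, and repetition count are arbitrary choices, not from the question):

```python
import numpy as np

# Simulate the two-sample proportion statistic T under H0: p_X = p_Y = p,
# and check that it looks standard normal for large n_X, n_Y.
rng = np.random.default_rng(0)
n_x, n_y, p, reps = 2000, 3000, 0.4, 20000

x = rng.binomial(n_x, p, size=reps)            # successes in sample X
y = rng.binomial(n_y, p, size=reps)            # successes in sample Y
p_x_hat, p_y_hat = x / n_x, y / n_y
p_hat = (x + y) / (n_x + n_y)                  # pooled estimate of p

t_stat = (p_x_hat - p_y_hat) / np.sqrt(p_hat * (1 - p_hat) * (1 / n_x + 1 / n_y))

# The empirical mean and standard deviation should be close to 0 and 1.
print(t_stat.mean(), t_stat.std())
```

With these sample sizes the empirical mean and standard deviation land within a few hundredths of 0 and 1, consistent with the asymptotic argument.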
36,878
In propensity score analysis, what are options to deal with very small or large propensities?
This is a good catch. You are referring to the positivity assumption. It requires that there be both exposed and unexposed participants at every combination of the values of the observed confounder(s) in the population under study. Positivity violations occur when certain subgroups in a sample rarely or never receive some treatments of interest. There are many papers on this topic, such as Austin and Stuart (2015) and Petersen et al. (2012). You may search for more online.
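Concretely, two common options in the literature are trimming (dropping units whose estimated propensity falls outside a range such as [0.1, 0.9], following Crump et al.) and truncation (capping extreme propensities at the bounds instead of dropping units). A minimal numpy sketch with an illustrative cutoff, not a prescription:

```python
import numpy as np

def trim(ps, lo=0.1, hi=0.9):
    """Return a boolean mask keeping only units with lo <= propensity <= hi."""
    return (ps >= lo) & (ps <= hi)

def truncate(ps, lo=0.1, hi=0.9):
    """Cap extreme propensity scores at the bounds instead of dropping units."""
    return np.clip(ps, lo, hi)

ps = np.array([0.02, 0.15, 0.5, 0.88, 0.99])
kept = ps[trim(ps)]        # drops the units at 0.02 and 0.99
capped = truncate(ps)      # 0.02 -> 0.1, 0.99 -> 0.9, others unchanged
print(kept, capped)
```

Trimming changes the estimand (the effect is now defined on the retained subpopulation), while truncation keeps everyone but biases the weights, so the choice is a substantive one, not just numerical.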
36,879
Are predictions from Bayesian Gaussian Process Regression normally distributed?
GPR does not make any statistical assumptions about the predictors. They don't even have to be numbers! All you need is a prior mean function and a covariance function, which can also be defined for non-numeric data (disjoint unions, strings, sets, etc.). That the predictions are normally distributed is sort of true, or rather assumed, when people talk about GPR, because its most interesting aspect is that it allows for exact inference: it essentially just boils down to linear algebra. The moment you introduce more flexibility, e.g. non-Gaussian noise or priors over hyperparameters, you lose this important property and have to resort to approximate inference. That said, even then there typically are computational advantages to using GPR-based models.
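The "just linear algebra" point can be made concrete: given any valid kernel $k$ (an RBF kernel on numbers below, but a string or set kernel plugs into exactly the same formulas), the posterior mean at new inputs is $K_* (K + \sigma^2 I)^{-1} y$. A minimal sketch with made-up training data:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

# Tiny synthetic training set with a very small noise level.
x = np.linspace(0.0, 4.0, 5)
y = np.sin(x)
noise = 1e-6

K = rbf(x, x) + noise * np.eye(len(x))
alpha = np.linalg.solve(K, y)        # (K + sigma^2 I)^{-1} y

# Posterior mean evaluated back at the training inputs: with tiny noise the
# GP interpolates, so this should reproduce y almost exactly.
mean_at_train = rbf(x, x) @ alpha
print(mean_at_train)
```

The posterior covariance is obtained the same way, as $K_{**} - K_* (K + \sigma^2 I)^{-1} K_*^\top$; nothing in these formulas cares whether the inputs are numbers, as long as the Gram matrix is positive semi-definite.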
36,880
In three-way ANOVA, how to interpret the three-way interaction?
A three way interaction means that the interaction between the two factors (A * B) is different across the levels of the third factor (C). If the interaction of A * B differs a lot among the levels of C then it sounds reasonable that the two way interaction A * B should not appear as significant. This could be the case of your data. To put it another way: A two way interaction A * B exists in reality (not statistically) along with a third-order interaction A * B * C only if the way that the factors A and B interact across the levels of the factor C is similar. So, use a table or an appropriate error chart in order to visualize the way that the interaction of A, B differs between the levels of C and try to interpret those findings. If you want to emphasize the differences that you will notice then you may apply standard statistical methods (t-test, Kruskal-Wallis etc.) and confirm the differences with a statistical test. Keep in mind that in that case it is a good idea to make a Bonferroni correction for the rejection level.
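To see why a strong three-way interaction can mask the two-way A * B term, construct cell means (hypothetical numbers) where the A * B interaction contrast has opposite signs at the two levels of C, so it cancels when averaged over C:

```python
import numpy as np

# Hypothetical cell means for 2x2x2 factors A, B, C: the A*B interaction
# contrast is +2 at C=0 and -2 at C=1, so it cancels when averaged over C.
means = np.empty((2, 2, 2))          # indexed [a, b, c]
means[:, :, 0] = [[10, 12],          # C = 0
                  [11, 15]]          # A*B contrast: 15 - 11 - 12 + 10 = +2
means[:, :, 1] = [[10, 12],          # C = 1
                  [11, 11]]          # A*B contrast: 11 - 11 - 12 + 10 = -2

def ab_contrast(m):
    """Interaction contrast m[1,1] - m[1,0] - m[0,1] + m[0,0] of a 2x2 table."""
    return m[1, 1] - m[1, 0] - m[0, 1] + m[0, 0]

within_c = [ab_contrast(means[:, :, c]) for c in range(2)]
pooled = ab_contrast(means.mean(axis=2))   # A*B table averaged over C
print(within_c, pooled)
```

The A * B contrast is clearly non-zero within each level of C, yet exactly zero in the pooled table, which is the situation where A * B * C is significant while A * B is not.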
36,881
In three-way ANOVA, how to interpret the three-way interaction?
The three-way ANOVA is used to determine if there is an interaction effect (independent variables interact if the effect of one of the variables differs depending on the level of the other variable) between three independent variables on a continuous dependent variable. Therefore you would only be interested in the significance value of A*B*C. If it is significant, you would report that there is a 3-way interaction, which means that at least one of the 2-way interactions changes across the third independent variable. If A*B*C is not significant, it would be better to apply a 2-way ANOVA, because when interaction effects are present, interpretation of the main effects or underlying lower-level interactions is incomplete or misleading. Hence the significances of A, B, C, A*B, B*C, A*C are not important.
36,882
can you explicitly show me the first iteration of newton-raphson and fisher scoring?
For Newton-Raphson, yes, we have $$ \pi_1 = \pi_0 - u(\pi_0)/J(\pi_0).$$ For Fisher scoring, as you mentioned, there is an unknown parameter ($\pi$) in the expected information $I(\pi)$. Given $I(\pi)=-E(J(\pi))=E[u(\pi)u^{'}(\pi)]$, we use the sum of squared per-observation scores to approximate the expected information $$\hat I(\pi_0) = \sum_i^n u_i(\pi_0)u_i^{'}(\pi_0),$$ where $u_i(\pi)= \frac{x_i}{\pi} - \frac{1-x_i}{1-\pi}$, and $x_i$ is the indicator of head for each draw. Then $$ \pi_1 = \pi_0 + u(\pi_0)/\hat I(\pi_0).$$ Note that we need large $n$ since the approximation is based on asymptotic theory.

I revised I_hat(pi) in @ihadanny's Python code. Now Newton-Raphson and Fisher scoring provide identical results.

import random
import numpy as np

pi_t = random.random()
n = 1000
draws = [1 if x < pi_t else 0 for x in np.random.rand(n)]
x_bar = np.mean(draws)

def u(pi):
    return n*x_bar/pi - n*(1-x_bar)/(1-pi)

def J(pi):
    return -n*x_bar/pi**2 - n*(1-x_bar)/((1-pi)**2)

def I_hat(pi):
    x = 0
    for i in range(0, n):
        x = x + (draws[i]/pi - (1-draws[i])/(1-pi))**2
    return x

def Newton(pi):
    return pi - u(pi)/J(pi)

def Fisher(pi):
    return pi + u(pi)/I_hat(pi)

def dance(method_name, method):
    print("starting iterations for: " + method_name)
    pi, prev_pi, i = 0.5, None, 0
    while i == 0 or (abs(pi-pi_t) > 0.001 and abs(pi-prev_pi) > 0.001 and i < 10):
        prev_pi, pi = pi, method(pi)
        i += 1
        print(method_name, i, "delta: ", abs(pi-pi_t))

dance("Newton", Newton)
dance("Fisher", Fisher)

Log Message

starting iterations for: Newton
Newton 1 delta:  0.00899203081545
Newton 2 delta:  0.00899203081545
starting iterations for: Fisher
Fisher 1 delta:  0.00899203081545
Fisher 2 delta:  0.00899203081545

Update

This is a special case where Newton-Raphson and Fisher scoring are identical, because $$\hat I(\pi)=\sum_i^n \left(\frac{x_i}{\pi} - \frac{1-x_i}{1-\pi}\right)^2= \frac{\sum_i^n x_i}{\pi^2} + \frac{(n-\sum_i^n x_i)}{(1-\pi)^2} = -J(\pi),$$ which just requires standard algebra.
36,883
What is the reason behind major difference in selection of mtry in RandomForest for Classification & regression?
The only useful source I found for this is the original paper of RF itself: http://machinelearning202.pbworks.com/w/file/fetch/60606349/breiman_randomforests.pdf

To quote: "An interesting difference between regression and classification is that the correlation increases quite slowly as the number of features used increases. The major effect is the decrease in PE*(tree). Therefore, a relatively large number of features are required to reduce PE*(tree) and get near optimal testset error."

So basically in classification the strength did not increase much with increasing features for the split but the correlation did, so they recommend using fewer features. While in regression the strength of the tree increases (error decreases) while correlation increases slowly, so more features are used for optimal performance. I guess you could just read their experiments on different datasets with number of features for both classification and regression and draw your own conclusion.
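For reference, the conventional defaults that follow from this reasoning (used, for example, by the R randomForest package) are mtry = floor(sqrt(p)) for classification and mtry = max(1, floor(p/3)) for regression, which can be written down directly:

```python
import math

def mtry_classification(p):
    """Conventional default: floor(sqrt(p)) features tried per split."""
    return max(1, math.floor(math.sqrt(p)))

def mtry_regression(p):
    """Conventional default: floor(p/3) features tried per split."""
    return max(1, p // 3)

for p in (4, 10, 100):
    print(p, mtry_classification(p), mtry_regression(p))
```

So for p = 100 features the classification default tries 10 candidate features per split, while the regression default tries 33, matching the paper's observation that regression trees benefit from a larger feature subset.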
36,884
What is the reason behind major difference in selection of mtry in RandomForest for Classification & regression?
Good defaults of hyperparameters in machine learning algorithms have to be found empirically on datasets (if there were a good theory for setting them, they would not be hyperparameters anymore). Probably it showed good performance for the creator of the specific package on some datasets, so he chose this value. I am doing some studies on many datasets and one of my aims is to find the best defaults in general.
36,885
Bias-Variance decomposition derivation
Here is a hint: consider $Y - \hat f = (Y - f) + (f - \hat f)$, and remember that $E(Y-f)=0$ and that $f$ is not random. Also, as @GeoMatt22 pointed out, you'll need $Cov(\varepsilon_0, \hat f) = 0$, which we get by virtue of iid errors. (Basically I think you're probably making this more complicated than it needs to be, and it really just boils down to my hint) Regarding whether or not $\hat f \perp Y$, generally our predictions are not just functions of $X$ but also of $Y$ so they can't be independent. In linear regression, for example, our fitted values $\hat Y = X(X^T X)^{-1}X^T Y$ so certainly it is not the case that $\hat Y \perp Y$ in general. Update I think the issue is that we've both been a little careless with what '$\varepsilon$' is. We observed data $(\bf y, \bf X)$ where in our data $y_i = f(x_i) + \varepsilon_i$, so that $\hat f$ is a function of $\bf y$, $\bf X$, and $\varepsilon_i$ for $i = 1, \dots, n$. We now observe a new point $(y_0, x_0)$ where we assume that $y_0 = f(x_0) + \varepsilon_0$. This is the key: this new point has its own error $\varepsilon_0$ that is independent of everything that went into $\hat f$ by the usual assumption of iid errors. So for $i = 1, \dots, n$ it definitely is not the case that $\varepsilon_i \perp \hat f$; but the error for a new point is indeed uncorrelated.
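Spelling the hint out in full, with $Y = f + \varepsilon_0$, $E(\varepsilon_0)=0$, $\operatorname{Var}(\varepsilon_0)=\sigma^2$, and the new point's error $\varepsilon_0$ independent of $\hat f$:

$$\begin{align} E[(Y - \hat f)^2] &= E\big[\big((Y-f) + (f - \hat f)\big)^2\big] \\ &= E[(Y-f)^2] + 2\,E[(Y-f)(f - \hat f)] + E[(f - \hat f)^2] \\ &= \sigma^2 + 2\,E[\varepsilon_0]\,E[f - \hat f] + E[(f - \hat f)^2] \\ &= \sigma^2 + \big(f - E[\hat f]\big)^2 + \operatorname{Var}(\hat f), \end{align}$$

where the cross term vanishes because $\varepsilon_0 \perp \hat f$, and the last line uses that $f$ is not random, so $E[(f-\hat f)^2] = (f - E[\hat f])^2 + \operatorname{Var}(\hat f)$: irreducible error plus squared bias plus variance.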
36,886
What is the correct analysis for this type of question? (Conditional Logistic Regression?)
First: "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." Box, G. E. P.; Draper, N. R. (1987), Empirical Model-Building and Response Surfaces, John Wiley & Sons. Thus there is no one and only correct way to model your data.

The research question broadly concerns the association of background type with the perception of category; however, there are 2 distinct questions:

Is background=3 (face) associated with response=0 (face) (research hypothesis: Yes)
Is background=2 (object) associated with response=1 (object) (research hypothesis: No)

Your setup is factorial. You are varying the levels of factors (Category, Noise and Background) and measuring the response for different combinations of these. Your particular interest is in the association of background (a 3-level factor) with the response (a binary variable), therefore a logistic regression analysis would seem to answer the questions. The estimate for background=2 answers the question: what is the difference in the log-odds of responding with object when the background is an object compared to when the background is black. This answers research question 2. To be consistent with the research hypothesis, this estimate should be small and/or not statistically significant. The estimate for background=3 answers the question: what is the difference in the log-odds of responding with object when the background is a face, compared to when the background is black. The negative of this estimate is therefore the difference in the log-odds of responding with face when the background is a face, compared to when the background is black. This answers research question 1. To be consistent with the research hypothesis, this estimate should be large in magnitude and statistically significant. However, that is not the end of the story....
Obviously you have repeated measures on participants, and this needs to be controlled for, since the responses of one participant will be more like other responses of the same participant than those of other participants (that is, there is likely to be correlation of measurements within each participant). This can be controlled for by including random intercepts for Participant or by including Participant as a fixed effect. 5 is considered by many as the minimum number of levels for a factor to be used as a random effect, and since you intend to add more participants to the study, the random intercept would be my recommendation. Either method controls for repeated measures so you could run both models and I will present both below. You also have repeated measures on each picture, where each picture is measured 3 times. Thus there may also be correlation within each picture. Since you have 420 different pictures, it would not be a great idea to include picture as a fixed effect to control for this, so a random intercept is appropriate. So, my starting model would be a mixed effects model with random intercepts for Picture_ID and Participant, with fixed effects for Category, Background and Noise (with Noise being coded as numeric). Participants are not nested within pictures, and pictures are not nested within participants, so these are crossed random effects.
In R using the lme4 package, this would be specified as:

glmer(Response ~ Category + Background + Noise + (1|Participant) + (1|Picture_ID),
      data=dt, family=binomial(link=logit))

Due to the small number of participants, an alternative model is:

glmer(Response ~ Category + Background + Noise + Participant + (1|Picture_ID),
      data=dt, family=binomial(link=logit))

The analysis can be extended to allow for:

interactions between the fixed effects
non-linear association between the response and Noise (by including quadratic and possibly higher order terms for Noise)
the association of Noise to vary between participants and/or pictures (by including random coefficients for Noise)

The above is based on contrasts of the desired background with the black background - that is face vs black and object vs black. If face vs object is required this can be handled by recoding the factor or specifying the reference level directly. If face vs not face or object vs not object is required then this can easily be accomplished by creating dummy variables.
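The last point, creating a "face vs not face" dummy from the 3-level Background factor, is a one-liner. A small sketch using the codes from the question (2 = object, 3 = face; the black background is assumed here to be coded 1, and the example values are made up):

```python
import numpy as np

# Background codes as used above: 2 = object, 3 = face (black assumed to be 1).
background = np.array([1, 3, 2, 3, 1, 2])

face_vs_not = (background == 3).astype(int)      # 1 if face background, else 0
object_vs_not = (background == 2).astype(int)    # 1 if object background, else 0

print(face_vs_not, object_vs_not)
```

Either dummy can then replace (or supplement) the 3-level factor in the model formula, changing the contrast from "face vs black" to "face vs everything else".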
What is the correct analysis for this type of question? (Conditional Logistic Regression?)
First: "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." Box, G. E. P.; Draper, N. R. (1987), Empirical Model-Building and Response Surfac
What is the correct analysis for this type of question? (Conditional Logistic Regression?) First: "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." Box, G. E. P.; Draper, N. R. (1987), Empirical Model-Building and Response Surfaces, John Wiley & Sons. Thus there is no one and only correct way to model your data. The research question is broadly inquiring about the association of background type on the perception of category, however there are 2 distinct questions: Is background=3(face) associated with response=0 (face) (research hypothesis: Yes) Is background=2(object) associated with response=1 (object) (research hypothesis: No) Your setup is factorial. You are varying the levels of factors (Category, Noise and Background) and measuring the response for different combinations of these. Your particular interest is in the association of background (a 3-level factor) with the response (a binary variable), therefore a logistic regression analysis would seem to answer the questions. The estimate for background=2 answers the question: what is the difference in the log-odds of responding with object when the background is an object compared to when the background is black. This answers research question 2. To be consistent with the research hypothesis, this estimate should be small and/or not statistically significant. The estimate for background=3 answers the question: what is the difference in the log-odds of responding with object when the background is a face, compared to when the background is black. The negative of this estimate is therefore the difference in the log-odds of responding with face when the background is a face, compared to when the background is black. This answers research question 1. To be consistent with the research hypothesis, this estimate should be small and/or not statistically significant. However, that is not the end of the story.... 
Obviously you have repeated measures on participants, and this needs to be controlled for, since the responses of one participant will be more like other responses of the same participant, than those of other participants (that is there is likely to be correlation of measurements within each participant). This can be controlled for by including random intercepts for Participant or by including Participant as a fixed effect. 5 is considered by many as the minimum number of levels for a factor to be used as a random effect and since you intend to add more participants to the study, this would be my recommendation. Either method controls for repeated measures so you could run both models and I will present both below. You also have repeated measures on each picture, where each picture is measured 3 times. Thus there may also be correlation within each picture. Since you have 420 different pictures, it would not be a great idea to include picture as a fixed effect to control for this, so a random intercept is appropriate. So, my starting model would be a mixed effects model with random intercepts for Picture_ID and Participant, with fixed effects for Category,Background and Noise (with noise being coded as numeric). Participants are not nested within pictures, and pictures and not nested within participants so these are crossed random effects. 
In R using the lme4 package, this would be specified as:

glmer(Response ~ Category + Background + Noise + (1|Participant) + (1|Picture_ID), data=dt, family=binomial(link=logit))

Due to the small number of participants, an alternative model is:

glmer(Response ~ Category + Background + Noise + Participant + (1|Picture_ID), data=dt, family=binomial(link=logit))

The analysis can be extended to allow for:

- interactions between the fixed effects
- a non-linear association between the response and Noise (by including quadratic and possibly higher-order terms for Noise)
- the association of Noise to vary between participants and/or pictures (by including random coefficients for Noise)

The above is based on contrasts of the desired background with the black background - that is, face vs black and object vs black. If face vs object is required, this can be handled by recoding the factor or specifying the reference level directly. If face vs not-face or object vs not-object is required, then this can easily be accomplished by creating dummy variables.
What is the correct analysis for this type of question? (Conditional Logistic Regression?)
I believe that conditional logistic regression will give you the desired results. You correctly identify the need to use a repeated-measures convention when analyzing these data. You have 5 respondents evaluated for the binary outcome of correct/incorrect face/object recognition over multiple conditions. The numerous responses from one person generate the need for a repeated-measures approach.

If your intent is actually better stated as whether the respondent chooses face or object, you could use the same analytic approach, but note that you would be interpreting the respondent's choice, not correct/incorrect classification. For a third category of "both" you would need multinomial logistic regression. I will assume you are interested in correct/incorrect classification in what follows.

You state: "Since every participant is measured in all 3 background conditions, it is a dependent design. Since, for one individual picture, I keep the noise constant over the 3 background conditions, it is somehow paired or matched." The conditions under evaluation, while limited in their value or quality, are not "conditioning" your analysis. The use of a grey background, face picture with 45% noise is just one vector of covariates present when a response is recorded. Grey background, object, 45% noise is another vector, while white, face, 10% noise is another. The regression will suggest to you whether background (dummy coded), noise or additional variables are associated with a correct response. The association between correct identification and a change in any one value, holding all other values constant, is the interpretation of multivariable regression. Thus, you will obtain a sense of the association with background, OR with a one-unit difference in noise, OR with whether a face/object was shown, by using conditional logistic regression.
Your model in R would be something like:

install.packages("survival")
require("survival")
clogit(correct ~ background + noise + pic_type + strata(person), data)

A more complicated model for each specific face or object among pictures could be considered, but you will dilute your ability to detect the desired effect of background.
Multiplication, addition, and concatenation in deep neural networks
Addition and concatenation are special cases of multiplication, where the weights are equal to 0 or 1. As a result, one can view using addition and concatenation as assumptions about what the network should be doing. E.g., in https://arxiv.org/abs/1606.03475, figure 1, we used concatenation to create the token embeddings $e_i$ from the characters, as we want to motivate the higher layers to consider the information from both the forward character-based RNN and the backward character-based RNN.
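To make the "special case of multiplication" point concrete, here is a minimal NumPy sketch (the vectors are made up for illustration): both addition and concatenation fall out of fixed 0/1 block weight matrices.

```python
import numpy as np

a = np.array([1.0, 2.0])   # e.g. forward-RNN state
b = np.array([3.0, 4.0])   # e.g. backward-RNN state

I = np.eye(2)
Z = np.zeros((2, 2))

# Addition: both inputs multiplied by identity weights, then summed.
addition = I @ a + I @ b                 # array([4., 6.])

# Concatenation: block weight matrices routing each input into its own
# half of the output vector.
W_a = np.vstack([I, Z])                  # shape (4, 2)
W_b = np.vstack([Z, I])                  # shape (4, 2)
concatenation = W_a @ a + W_b @ b        # array([1., 2., 3., 4.])

assert np.allclose(addition, a + b)
assert np.allclose(concatenation, np.concatenate([a, b]))
```

A learned multiplication generalizes both: the network is free to discover the 0/1 pattern, but fixing it bakes in the assumption.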
Why do we want an objective function to be a convex function?
First of all, as @whuber mentioned, many people work on non-convex optimisation. Having said that, your definition "as long as the local minimum is the global minimum" is rather... lax. See for instance the Easom function $f(x,y) = -\cos(x)\cos(y)\exp(-(x-\pi)^2 - (y-\pi)^2)$. It has a single minimum, if that is your concern, but if you are even remotely away from it you are... stuffed. Standard gradient-based methods like BFGS (BFGS in R's optim) and Conjugate Gradient (CG in R's optim) will suffer greatly. You will have to essentially make an "educated guess" about your answer (e.g. Simulated Annealing - SANN in R's optim), which is a very computationally expensive routine. In R:

easom <- function(x){ -cos(x[1]) * cos(x[2]) * exp( -(x[1] - pi)^2 - (x[2] - pi)^2) }
optim(easom, par=c(0,0), method='BFGS')$par # 1.664149e-06 1.664149e-06 # Junk
optim(easom, par=c(0,0), method='CG')$par   # 0 0                       # Insulting junk
optim(easom, par=c(0,0), method='SANN')$par # 3.382556 2.052309         # Some success!

There are other, even worse surfaces to optimise against. See for example Michalewicz's or Schwefel's functions, where you might have multiple local minima and/or flat regions. This flatness is a real problem. For example, in generalised as well as standard linear mixed effects models, as the number of estimated parameters increases, the log-likelihood function, even after profiling out the residual variance and the fixed-effects parameters, can still be very flat. This will lead the model to converge on the boundary of the parameter space or simply to a suboptimal solution (this is actually one of the reasons some people, myself included, are skeptical of the 'keep it maximal' idea for LMEs). Therefore "how convex" your objective function is can have a big impact on your model as well as on your later inference.
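The same experiment can be sketched in Python with SciPy (illustrative only; scipy.optimize has no SANN, so a coarse global grid search followed by a local polish stands in for the "educated guess" strategy):

```python
import numpy as np
from scipy.optimize import minimize, brute

def easom(p):
    x, y = p
    return -np.cos(x) * np.cos(y) * np.exp(-(x - np.pi)**2 - (y - np.pi)**2)

# The true minimum (value -1) is at (pi, pi), but at (0, 0) the surface is
# so flat that the numerical gradient is ~0 and BFGS stalls immediately.
res_bfgs = minimize(easom, x0=[0.0, 0.0], method='BFGS')
print(res_bfgs.x, res_bfgs.fun)      # stuck near the origin, fun ~ 0: junk

# "Educated guess": evaluate a 50x50 grid over [-10, 10]^2, then polish
# the best grid point with a local (Nelder-Mead) search.
x_global = brute(easom, ranges=((-10, 10), (-10, 10)), Ns=50)
print(x_global, easom(x_global))     # close to (pi, pi), fun close to -1
```

The grid search is exactly the expensive part: 2500 evaluations just to land inside the one narrow basin that gradient methods cannot see from outside.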
Strange likelihood trace from MCMC chain
Log-likelihood is a sum of log-densities over some datapoints, given some parameter values. Recall that densities are relative measures of "probability per foot". This means that they can be arbitrarily low or high, as in this example of a uniform density. Since you sum density estimates for different points, the total will always be at least $N$ times the minimal log-density possible given your data and parameters. Since your MCMC algorithm wanders around some parameter space, the similarity of log-likelihoods between iterations will be proportional to how "far" it jumps in subsequent steps. So, given the limited information you provided, there is nothing strange about such values, since there are no "typical" likelihood values.
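A quick numerical illustration of the scale argument (made-up data, Python/SciPy): the magnitude of a summed log-likelihood is driven by the density's units and by $N$, so no particular magnitude is "typical".

```python
import numpy as np
from scipy.stats import norm, uniform

rng = np.random.default_rng(0)
x = rng.uniform(0, 0.5, size=100)        # 100 draws from U(0, 0.5)

# Uniform(0, w) has density 1/w, so the log-likelihood is -N*log(w):
# it can be made arbitrarily large (positive) by shrinking w.
ll_narrow = uniform(0, 0.5).logpdf(x).sum()   # 100 * log(2)   ~ +69.3
ll_wide   = uniform(0, 2.0).logpdf(x).sum()   # 100 * log(1/2) ~ -69.3
print(ll_narrow, ll_wide)

# For a fixed model, the magnitude also scales with the number of points N:
# roughly -1.42 per point for standard-normal data under the true model.
y = rng.normal(size=1000)
print(norm.logpdf(y).sum())
```

Neither +69 nor -1400 is "strange"; both are just sums of log-densities under different units and sample sizes.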
Hidden Markov Model and Naive Bayes similarity
A Hidden Markov Model assumes a relationship between $y_n$ and $y_{n+1}$. For example, say we are doing natural language processing, and $y_n$ denotes the $n$-th word in a sentence. If we know $y_n$ is "stack", then the probability of $y_{n+1}$ being "overflow" might be higher than if $y_n$ were something else, say "cat". Naive Bayes does not make that assumption; instead it assumes that the observation sequence is i.i.d. It's more like each $y$ is a random word from a random sentence, so knowing $y_n$ does not affect $y_{n+1}$. Moreover, "plugging in the previous state (Y-1) as a feature (Xn) on the Naive Bayes" would make it a "reversed" Markov chain, as the arrow is now from $y_n$ to $y_{n-1}$. It assumes the same relationship as if, in that natural language processing case, you read from right to left.
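Here is a toy numerical contrast between the two assumptions (all probabilities made up for illustration): the Markov model scores a sequence with transition probabilities, the Naive-Bayes-style i.i.d. model with marginals only.

```python
# Hypothetical word probabilities (made-up numbers).
marginal = {"stack": 0.4, "overflow": 0.3, "cat": 0.3}

# Markov assumption: the next word depends on the previous one.
transition = {
    "stack":    {"stack": 0.1, "overflow": 0.8, "cat": 0.1},
    "overflow": {"stack": 0.3, "overflow": 0.2, "cat": 0.5},
    "cat":      {"stack": 0.3, "overflow": 0.1, "cat": 0.6},
}

def prob_markov(seq):
    p = marginal[seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        p *= transition[prev][cur]
    return p

def prob_iid(seq):
    # i.i.d. assumption: every position is drawn independently.
    p = 1.0
    for w in seq:
        p *= marginal[w]
    return p

print(prob_markov(["stack", "overflow"]))  # 0.4 * 0.8 = 0.32
print(prob_iid(["stack", "overflow"]))     # 0.4 * 0.3 = 0.12
```

Under the Markov model, seeing "stack" sharply raises the probability of "overflow" next; under the i.i.d. model it changes nothing.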
Compare MLR model to model $Y_i = (\beta_0 + \beta_1x_{1i} + \beta_2x_{2i} + \epsilon_i)^{\beta_3}$?
No (at least not with nls). From its documentation, nls fits functions of the form $Y_i \mid \theta, X_i = f(\theta, X_i) + \epsilon$ (and is the MLE in the case that $\epsilon$ is iid Normal), so your relationship is not in the non-linear least squares class. Let's see if we can describe the distribution $Y$ might follow. Let $Z_i = \beta_0+\beta_1 x_{1i} + \beta_2 x_{2i} + \epsilon_i$. Given that $\epsilon_i$ is $N(0, 1)$, then $Z_i \sim N(\beta_0+\beta_1 x_{1i} + \beta_2 x_{2i}, 1)$. If $\beta_3 = 2$, for example, we could have that $Y_i$ is non-central $\chi^2_1$.

Yes (using Box-Cox transformations). If $Y_i = Z_{i}^{\beta_3}$ is a one-to-one transformation (i.e., at a minimum, $\beta_3$ is not even) then you have just rediscovered the Box-Cox family of transformations: $$ Y(\lambda) = \begin{cases} (\lambda Z + 1)^{1/\lambda}, & \lambda > 0 \\ e^Z, & \lambda = 0, \end{cases} $$ which clearly includes the scenario you describe. Classically, $\lambda$ is estimated through the profile likelihood, i.e., plugging in different values of $\lambda$ and checking the RSS of the least-squares fit. An Analysis of Transformations Revisited (1981) appears to give a good review of the theory. The function boxcox in the package MASS does such an estimation. If $\beta_3$ is a parameter of interest rather than a nuisance, you may need to do something more sophisticated.
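A sketch of the profile-likelihood idea in Python with simulated data (all names and values made up; note the Jacobian term, which is what keeps the criterion from trivially favouring transformations that merely shrink the scale of the transformed response):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)
eps = rng.normal(0.0, 0.05, n)

# Simulate Y = (b0 + b1*x1 + b2*x2 + eps)^b3 with b3 = 2 and a strictly
# positive inner linear predictor.
z = 1.0 + 1.0 * x1 + 0.5 * x2 + eps
y = z ** 2.0

X = np.column_stack([np.ones(n), x1, x2])
log_y_sum = np.log(y).sum()

def profile_loglik(b3):
    # Back-transform, fit by OLS, and add the log-Jacobian of the
    # transformation y -> y^(1/b3).
    yt = y ** (1.0 / b3)
    beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
    rss = np.sum((yt - X @ beta) ** 2)
    return (-0.5 * n * np.log(rss / n)
            + (1.0 / b3 - 1.0) * log_y_sum - n * np.log(b3))

grid = np.linspace(0.5, 4.0, 36)
best_b3 = grid[np.argmax([profile_loglik(b) for b in grid])]
print(best_b3)   # should land close to the true exponent of 2
```

This is the same recipe MASS::boxcox automates (up to the parameterisation $\lambda = 1/\beta_3$ used here for illustration).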
Compare MLR model to model $Y_i = (\beta_0 + \beta_1x_{1i} + \beta_2x_{2i} + \epsilon_i)^{\beta_3}$?
I think Andrew M has given a good answer; I just want to make a few related points. As Andrew M indicates, you can't fit the model as-is directly with nonlinear least squares; however, you can fit this closely related model with nonlinear LS:

$Y_i = (\beta_0 + \beta_1x_{1i} + \beta_2x_{2i})^{\beta_3} + \epsilon_i$

This might not seem much use, but it would have value in obtaining an initial estimate of $\beta_3$ to get a good starting point for optimization of the actual model (whether performed directly, or via Box-Cox).

Note also that if $Y$ is strictly positive, you can consider this transformation:

$\log(Y_i) = \beta_3 \log(\beta_0 + \beta_1x_{1i} + \beta_2x_{2i} + \epsilon_i)$

Again, a slight modification (pulling the error term outside the parentheses) allows nonlinear least squares fitting. You could then reweight using the resulting estimate of $\beta_3$ to improve the estimates. The only difficulty would be if you hit a situation where the fitted value inside the log wasn't strictly positive.

[If you're prepared to consider Weibull regression (that is, where the $Y$'s are Weibull with mean dependent on the $X$'s), you might find that you can do something useful with that. It would change the form of the relationship with the $x$'s, however. A related approach would be: given a value for $\beta_3$, transform $Y$ ($Y^*=Y^{1/\beta_3}$) and fit an exponential GLM with identity link to $Y^*$ rather than a Gaussian. This would again correspond to a Weibull model for $Y$, but with the parameters entering in the way you suggest. This could be done over a grid of $\beta_3$ values to maximize the likelihood for it.]
What is the meaning of the notation $P_\theta()$, where a probability has a subscript Greek letter?
It means that, under the distribution indexed by $\theta$, the probability that the statistic $T(x)$ equals $t$ is zero. Another way you can write it is: $\Pr(T(x) = t \mid \Theta = \theta) = 0$.
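A concrete instance (Python/SciPy, hypothetical numbers): if the family is Binomial$(5, \theta)$ and $T(x)$ counts the successes, then $P_\theta(T(x) = 2)$ is a different number for each value of the index $\theta$.

```python
from scipy.stats import binom

# P_theta(T(x) = 2) for two different members of the family:
p_half  = binom.pmf(2, n=5, p=0.5)   # C(5,2) * 0.5^5           = 0.3125
p_tenth = binom.pmf(2, n=5, p=0.1)   # C(5,2) * 0.1^2 * 0.9^3   = 0.0729
print(p_half, p_tenth)
```

The subscript just records which member of the parametric family the probability is computed under.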
Cook's distance in detecting outliers
Like you said, Cook's Distance measures the change in the regression from removing each individual point. If things change quite a bit by the omission of a single point, then that point was having a lot of influence on your model. Define $\hat{Y}_{j(i)}$ to be the fitted value for the $j$th observation when the $i$th observation is deleted from the data set. Cook's Distance measures how much case $i$ changes all the predictions:
$$D_i = \frac{\sum_{j=1}^{n}(\hat{Y}_j - \hat{Y}_{j(i)})^2}{p\,MSE} = \frac{e_i^2}{p\,MSE}\left[\frac{h_{ii}}{(1-h_{ii})^2}\right]$$
If $D_i \geq 1$ it is extreme (for small to medium datasets). Cook's Distance shows the effect of the $i$th case on all the fitted values. Note that the $i$th case can be influential through:

- a big $e_i$ and moderate $h_{ii}$
- a moderate $e_i$ and big $h_{ii}$
- a big $e_i$ and big $h_{ii}$

In R, use cooks.distance(model) (see also the influence.measures() function in the stats package).
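The equality of the two expressions above is exact algebra, and can be checked numerically. Here is a small NumPy sketch on simulated data comparing the closed form against brute-force leave-one-out refits:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 3                              # p = columns of X incl. intercept
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
yhat = H @ y
e = y - yhat
mse = e @ e / (n - p)
h = np.diag(H)

# Closed form: D_i = e_i^2 / (p*MSE) * h_ii / (1 - h_ii)^2
D_closed = e**2 / (p * mse) * h / (1 - h)**2

# Brute force: refit without observation i, compare all n fitted values.
D_loo = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    beta_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    D_loo[i] = ((yhat - X @ beta_i) ** 2).sum() / (p * mse)

assert np.allclose(D_closed, D_loo)       # the two definitions agree exactly
```

The closed form is what makes Cook's D cheap: no actual refitting is needed, only the residuals and the hat-matrix diagonal.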
Cook's distance in detecting outliers
Cook's D is ineffective at detecting a cluster of outliers, because removing one of them will not affect the model very much (there are still other outliers holding the fit in place). You could use the residual as a measure, which is sensitive to clusters. A simple implementation of k-means is also effective.
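A small simulation (made-up data, Python) showing this masking effect: a lone outlier gets a large Cook's D, while each member of an identical pair looks far less influential, because deleting either one leaves the other holding the line in place.

```python
import numpy as np

def cooks_d(x, y):
    # Cook's distances for simple linear regression via the closed form.
    X = np.column_stack([np.ones_like(x), x])
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    e = y - H @ y
    p = X.shape[1]
    mse = e @ e / (len(x) - p)
    h = np.diag(H)
    return e**2 / (p * mse) * h / (1 - h)**2

base_x = np.arange(1.0, 21.0)
base_y = base_x.copy()                    # 20 points exactly on y = x

# One lone outlier at (30, 0): deleting it changes the fit a lot.
x1 = np.append(base_x, 30.0); y1 = np.append(base_y, 0.0)
d_single = cooks_d(x1, y1)[-1]

# A pair of outliers at (30, 0): each one masks the other.
x2 = np.append(base_x, [30.0, 30.0]); y2 = np.append(base_y, [0.0, 0.0])
d_pair = cooks_d(x2, y2)[-2:]

print(d_single, d_pair)   # the lone outlier's D is several times larger
```

This is why single-deletion diagnostics are usually complemented by residual-based or clustering-based checks when grouped outliers are suspected.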
FDR correction when tests are correlated
You're looking for the Benjamini-Yekutieli procedure: Benjamini, Yoav; Yekutieli, Daniel. The control of the false discovery rate in multiple testing under dependency. Ann. Statist. 29 (2001), no. 4, 1165--1188. doi:10.1214/aos/1013699998. http://projecteuclid.org/euclid.aos/1013699998 The procedure is available in R using the method = "BY" option in p.adjust(). For more info, try ?p.adjust.
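If you want to see exactly what method = "BY" computes, here is a small NumPy re-implementation (a sketch intended to match R's p.adjust; for real analyses use p.adjust itself or statsmodels' multipletests(..., method='fdr_by')):

```python
import numpy as np

def p_adjust_by(p):
    """Benjamini-Yekutieli adjusted p-values (intended to match R's
    p.adjust(p, method='BY'))."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    c_m = np.sum(1.0 / np.arange(1, m + 1))    # dependency correction factor
    order = np.argsort(p)
    # BH-style step-up quantities, inflated by c(m):
    adj = p[order] * c_m * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downwards, cap at 1.
    adj = np.minimum(np.minimum.accumulate(adj[::-1])[::-1], 1.0)
    out = np.empty(m)
    out[order] = adj
    return out

pvals = [0.01, 0.02, 0.03, 0.04]
print(p_adjust_by(pvals))   # each = 0.04 * (1 + 1/2 + 1/3 + 1/4) ~ 0.0833
```

The only difference from plain Benjamini-Hochberg is the factor $c(m) = \sum_{i=1}^m 1/i$, which is the price paid for validity under arbitrary dependence.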
Different ways to understand neural networks
Neural networks are a generalization of regression: http://briandolhansky.com/blog/artificial-neural-networks-linear-regression-part-1
Neural networks are universal function approximators (Wikipedia).
Turing Machines are Recurrent Neural Networks: http://lipas.uwasa.fi/stes/step96/step96/hyotyniemi1/
Different ways to understand neural networks
Neural networks as finite approximations of continuous functions: https://en.wikipedia.org/wiki/Universal_approximation_theorem In short, neural networks can approximate continuous functions with arbitrary accuracy (which depends on the number of neurons) and consequently form a dense set in the space of continuous functions on any closed and bounded set. This is tangentially related to ideas like Fourier series and polynomial approximation.
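A hands-on illustration of the theorem's flavour (NumPy, no training involved): a one-hidden-layer ReLU network computes $|x|$ exactly with two units, and with one hidden unit per knot it realizes any piecewise-linear interpolant, so the error in approximating a continuous target shrinks as neurons are added.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Width-2 exact example: |x| = relu(x) + relu(-x).
assert relu(3.0) + relu(-3.0) == 3.0

# General recipe: a hand-built ReLU layer realizing the piecewise-linear
# interpolant of sin(x) on [0, pi] with one hidden unit per knot.
target = np.sin
knots = np.linspace(0, np.pi, 20)
fvals = target(knots)
slopes = np.diff(fvals) / np.diff(knots)
coefs = np.concatenate([[slopes[0]], np.diff(slopes)])   # slope changes

def net(x):
    x = np.asarray(x)[..., None]
    return fvals[0] + relu(x - knots[:-1]) @ coefs

xs = np.linspace(0, np.pi, 1000)
max_err = np.max(np.abs(net(xs) - target(xs)))
print(max_err)    # small already; shrinks as the number of knots grows
```

This is only a one-dimensional sketch of the constructive intuition; the theorem itself covers compact sets in any dimension and other activation functions.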
Are there two definitions of the word bias?
The term "bias" has a specific definition in the statistical literature (the difference between the expected value of an estimator and the thing being estimated), but that isn't to say it loses its original, more general meaning. Which one is intended will depend on context, and oftentimes you will have a mixture of the two.

I would say the first usage is in general the less precise kind, since data imputation is a method used in applied problems where one need not assume that any true value of the parameter even exists. Here "bias" is basically synonymous with "shrunk towards zero."

As far as the second usage is concerned, the term "bias-variance trade-off" does originally derive from the more formal definition of bias, but nonetheless I would still say it refers more to the general inflexibility of a model-fitting procedure, and not necessarily to the question of whether or not an estimated regression function is correct on average.