34,301
Boxplots as tables
It depends on your objective: for a quick visualization I would stick with a boxplot, but for a more detailed examination I would stay with the data.
34,302
What are the ANOVA's benefits over a normal linear model?
Let's look at what you get when you actually use the anova() function (the numbers are different than in your example, since I don't know what seed you used for generating the random numbers, but the point remains the same):

    > anova(model)
    Analysis of Variance Table

    Response: x$rand
              Df  Sum Sq Mean Sq F value Pr(>F)
    x$factor   2   4.142  2.0708  1.8948 0.1559
    Residuals 97 106.009  1.0929

The F-test for the factor simultaneously tests $H_0: \beta_1 = \beta_2 = 0$, i.e., the hypothesis that the factor as a whole is not significant. A common strategy is to first test this omnibus hypothesis before digging into which levels of the factor differ from each other. Also, you can use the anova() function for full versus reduced model tests. For example:

    > x <- data.frame(rand=rnorm(100), factor=sample(c("A","B","C"),100,replace=TRUE), y1=rnorm(100), y2=rnorm(100))
    > model1 <- lm(x$rand ~ x$factor + x$y1 + x$y2)
    > model2 <- lm(x$rand ~ x$factor)
    > anova(model2, model1)
    Analysis of Variance Table

    Model 1: x$rand ~ x$factor
    Model 2: x$rand ~ x$factor + x$y1 + x$y2
      Res.Df    RSS Df Sum of Sq      F Pr(>F)
    1     97 105.06
    2     95 104.92  2   0.13651 0.0618 0.9401

which is a comparison of the full model, with the factor and two covariates (y1 and y2), against the reduced model, where we assume that the slopes of the two covariates are both simultaneously equal to zero.
34,303
Density estimation methods?
Dirichlet process mixture models are a very flexible nonparametric Bayesian approach to density modeling, and can also be used as building blocks in more complex models. They are essentially an infinite generalization of parametric Gaussian mixture models and don't require specifying the number of mixture components in advance.
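As a concrete sketch, a (truncated) Dirichlet process mixture can be fit with scikit-learn's BayesianGaussianMixture; the 10-component cap, the synthetic data, and the 0.01 weight threshold below are illustrative assumptions, not part of the original answer:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two-component synthetic data; the model is NOT told there are two.
x = np.concatenate([rng.normal(-2, 0.5, 300),
                    rng.normal(3, 1.0, 300)]).reshape(-1, 1)

# Truncated Dirichlet process mixture: cap at 10 components and let the
# DP prior shrink the weights of unneeded components toward zero.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(x)

# Effective number of components = those with non-negligible weight.
n_used = int(np.sum(dpgmm.weights_ > 0.01))
```

Note that this is a variational approximation with a finite truncation level, not a full DP posterior; the point is only that the number of effective components is inferred rather than fixed in advance.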
34,304
Density estimation methods?
Gaussian processes are another nonparametric Bayesian approach to density estimation. See the Gaussian Process Density Sampler paper.
34,305
Density estimation methods?
I use Silverman's adaptive kernel density estimator; see e.g. the akj help page in the R package quantreg.
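A minimal sketch of the adaptive idea (a pilot fixed-bandwidth estimate, then local bandwidths proportional to the pilot density to the power $-\alpha$, as in Silverman's book); the rule-of-thumb pilot bandwidth and $\alpha = 0.5$ are conventional defaults, and this NumPy version is an illustration rather than the akj implementation:

```python
import numpy as np

def adaptive_kde(data, grid, alpha=0.5):
    """Silverman-style adaptive Gaussian KDE (sketch).

    A pilot fixed-bandwidth estimate sets local bandwidths: points in
    sparse regions get wider kernels.
    """
    n = len(data)
    h = 1.06 * data.std() * n ** (-0.2)   # rule-of-thumb pilot bandwidth
    # Pilot density evaluated at each data point
    z = (data[:, None] - data[None, :]) / h
    pilot = np.exp(-0.5 * z ** 2).sum(1) / (n * h * np.sqrt(2 * np.pi))
    g = np.exp(np.log(pilot).mean())      # geometric mean of pilot values
    lam = (pilot / g) ** (-alpha)         # local bandwidth factors
    # Adaptive estimate on the grid: each point i uses bandwidth h * lam[i]
    u = (grid[:, None] - data[None, :]) / (h * lam[None, :])
    dens = (np.exp(-0.5 * u ** 2) / (h * lam[None, :] * np.sqrt(2 * np.pi))).sum(1) / n
    return dens

rng = np.random.default_rng(1)
data = rng.normal(0, 1, 500)
grid = np.linspace(-4, 4, 81)
dens = adaptive_kde(data, grid)
```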
34,306
Density estimation methods?
Half-space depth, a.k.a. bagplots. http://www.r-project.org/user-2006/Slides/Mizera.pdf
34,307
Density estimation methods?
A nice short paper by José Bernardo here gives a useful Bayesian method to estimate a density. But, as with most things Bayesian, there is a computational cost to pay for this method.
34,308
What is a rank?
The question seems to seek a high-brow formal and abstract definition of ranks. This answer is for anyone -- especially those attracted by the thread title, not so much the detailed question -- interested in a low-brow informal treatment.

Just to remind you of basics if that would be helpful: suppose you have some values, say 42, 1, 2, 2.71828, 3.14159. We can sort (order) those values to 1, 2, 2.71828, 3.14159, 42. So how might we rank them as well? Five values, all different, might be ranked 1 to 5, but how? In ranking, there are at first sight at least three practical questions that arise:

1. Is the largest value to be assigned 1, or the smallest? The general statistical convention seems to be to assign the smallest value to rank 1. This is tied up with conventions to do with order statistics. In 1999, in a paper on pp. 5-7 of Stata Technical Bulletin 51, field and track were suggested as terms for two kinds of ranks. Field ranks are those for which high values get low ranks (jumping or throwing the greatest height or distance wins). Track ranks are those for which low values get low ranks (in running, hurdling, walking, etc., the shortest time wins). This terminology is now used in Stata. Negating a variable or argument should be enough to flip between field and track ranks. Hence suppose that 2, 3, 5, 7, 11 are to be ranked with 11 first: ranking -11, -7, -5, -3, -2 is how to do it if your software does not provide a switch.

2. What should be done about ties? Now suppose that ties may exist, so that two or more observations have the same value. This is very common in real data, especially if variables are counted or categorical. A common convention given ties is to assign the mean of the ranks that would have been given otherwise, so that the sum of the ranks is preserved. Thus the values 1, 2, 2, 3, 3, 3 would be ranked 1, 2.5, 2.5, 5, 5, 5. The two values of 2 would have been ranked 2 and 3 (or 3 and 2!) had their values been slightly different, and the three values of 3 would have been ranked 4, 5, 6 in some order had their values all been slightly different. Since 2.5 + 2.5 = 2 + 3 and 5 + 5 + 5 = 4 + 5 + 6, the sum of ranks is what it would have been otherwise. This may seem small print, but the calculations behind nonparametric tests and associated procedures typically use the sum of ranks, directly or indirectly. Substantively, such tied ranks are often reported as 1st, 2nd equal, 2nd equal, 4th equal, 4th equal, 4th equal, as used to be quite common in sports (less often now with, say, precise timers, photo finishes, and detailed rules for breaking ties) and in education (the expression "schoolmaster's rank" is one I've seen in the literature but do not recommend).

3. How to plot ordered or ranked data? The standard definition of order statistics, as defined say for a sample of size $n$ of a variable $x$, namely $x_1, \dots, x_n$, by the inequalities $$x_{(1)} \le x_{(2)} \le \dots \le x_{(n-1)} \le x_{(n)}$$ carries with it a recognition that tied values may arbitrarily but usefully be assigned different ranks, or at least tags, $1$ to $n$, so that each of those ranks or tags occurs exactly once. This convention can be helpful for plotting values against their rank, or vice versa, in a quantile plot or rank-size plot (many other names can be found). Otherwise tied values all assigned the same rank would necessarily be plotted at the same coordinates, which makes recognition of ties more difficult. This convention is linked in turn to various slightly different ways of defining plotting positions or percentile ranks, a topic introduced for example (with some references and historical details) in this FAQ.
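The mean-rank convention for ties, and the negation trick for flipping between field and track ranks, can both be checked with scipy.stats.rankdata (whose default method is exactly this averaging):

```python
from scipy.stats import rankdata

x = [1, 2, 2, 3, 3, 3]
ranks = rankdata(x)                      # method='average' is the default
# Tied twos share rank (2+3)/2 = 2.5; tied threes share (4+5+6)/3 = 5.
assert list(ranks) == [1, 2.5, 2.5, 5, 5, 5]
assert ranks.sum() == sum(range(1, 7))   # rank sum 21 is preserved

# Field ranks (largest value first): negate and rank again.
field = rankdata([-v for v in x])
assert list(field) == [6, 4.5, 4.5, 2, 2, 2]
```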
34,309
What is a rank?
I've included my attempt at defining a rank in order to illustrate some of the properties I'm interested in, but this should not be taken as a definitive answer. This definition reflects my cobbled-together thinking from examples I have seen of "ranks".

Assume a collection of random variables $\{X_1(\omega), \ldots, X_n(\omega) \}$ on outcome space $\Omega$, and a partial order $\leq$. An abstract ranking $\rho: \prod_{i=1}^n X_i(\omega) \mapsto \mathbb{R}_{\geq 0}^n$ is a function such that there exists a non-decreasing function $\kappa:\mathbb{N} \mapsto \mathbb{R}_{\geq0}$ that satisfies $\rho(\vec x)_i \leq \kappa(n)$ for all $i\in \{1, \ldots, n\}$. It must also hold that $\rho(\vec x)_i \leq \rho(\vec x)_j \iff x_i \leq x_j$ for all $i,j \in \{1, \ldots, n\}$ and for all $\omega \in \Omega$, exor $\rho(\vec x)_i \geq \rho(\vec x)_j \iff x_i \leq x_j$ for all $i,j \in \{1, \ldots, n\}$ and for all $\omega \in \Omega$. A component of an image element of an abstract ranking is called an abstract rank.

Here are some rationalizations for why I tried to define ranks this way.

Why is $\rho$ non-negative? I think for three reasons. It is (for me) a little easier to keep track of positive numbers. Empirical induction: all the examples I've seen of "ranks" or "grades" meet this criterion. Allowing $\rho$ to have zero in its image can be a programming convenience, where the rank can coincide with the indices of a data structure of sorted elements. Is adding one so difficult? No, but zero doesn't bother me either. Aesthetic: this feels right. ¯\_(ツ)_/¯

Why monotonic rather than non-decreasing in particular? While grades are order-preserving, sometimes we rank quantities in an order-reversing fashion. For example, in a weight-lifting contest, 1st place might go to the person who lifts the most weight.

Why this bounding $\kappa$ function? A function being monotone and non-negative just didn't seem specific enough. I've noted as a matter of empirical induction that such a bound occurs with what have usually been described as ranks, and it is likewise true of grades, so I am content to include it.

I'm still ruminating about this potentially additional property: it is also required of $\kappa$ that for any finite $n$ there exists $\omega$ in outcome space $\Omega$ such that $\max_i \rho (\vec x_{\omega})_i = \kappa(n)$.
34,310
What is a rank?
How about: A ranking is an order-preserving or order-reversing (surjective) mapping of a set of numbers to an interval of natural numbers starting from 0 or 1. A rank is an element from the image of a ranking.

This definition has some limits when there are ties. In that case people sometimes define the rank as an average (not a natural number), or some places are skipped (not a surjective function). For example we can have $$\begin{array}{r} \text{input} & \{1,&2,&3,&4,&4,&5\} \\\hline \text{output1} & \{1,&2,&3,&4,&4,&5\} \\ \text{output2} & \{1,&2,&3,&5,&5,&6\} \\ \text{output3} & \{1,&2,&3,&4.5,&4.5,&6\} \\ \end{array}$$

Output 1 relates to the rank of a number $x$ defined as 'the number of unique values equal to or below $x$'. Output 2 relates to the rank of a number $x$ defined as 'the number of numbers equal to or below $x$'. Output 3 relates to the rank of a number $x$ defined as 'the number of numbers equal to or below $x$ if the value is unique, and, when several numbers are the same/tied, the average of the ranks these numbers would have been given had they not been tied'. (Note that the last element, 5, gets rank 6 under outputs 2 and 3, since six numbers are equal to or below it.)

The definition of a ranking being 'surjective' and mapping to 'natural' numbers does not cover cases 2 and 3. But if we adjust the definition to include cases 2 and 3, then the definition becomes very general, just any order-preserving mapping, and the idea of a ranking as a counting process is lost. A definition that can reconcile examples 1 and 2 is: the rank of the number $x_j$ in a list is the number of numbers $x_i$ in that list, counted either with or without multiplicity, for which $x_i R x_j$, where $R$ is the binary relation $\leq$ or the binary relation $\geq$.
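The three outputs line up with the tie-breaking methods of scipy.stats.rankdata (mapping the verbal definitions to the 'dense', 'max', and 'average' methods is my reading, not stated in the original):

```python
from scipy.stats import rankdata

x = [1, 2, 3, 4, 4, 5]
dense = rankdata(x, method='dense')    # output 1: count of unique values <= x
maxr  = rankdata(x, method='max')      # output 2: count of values <= x
avg   = rankdata(x, method='average')  # output 3: mean of the tied positions

assert list(dense) == [1, 2, 3, 4, 4, 5]
assert list(maxr)  == [1, 2, 3, 5, 5, 6]
assert list(avg)   == [1, 2, 3, 4.5, 4.5, 6]
```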
34,311
What is a rank?
There are various definitions of ranks depending on context and the purpose for which they are defined. However, a common definition relates them directly to order statistics. Given a set of numbers $x_1,...,x_n$ with corresponding order statistics $x_{(1)} \leqslant \cdots \leqslant x_{(n)}$ the typical way to define the ranks $r_1,...,r_n$ for the variables is to require that they satisfy the defining requirement: $$x_{(r_i)} = x_i \quad \quad \quad \quad \quad \text{for all } i = 1,...,n.$$ This requirement is sufficient to define the ranks in the case where all the initial values are distinct. In the case of ties there are various definitions of the ranks that will meet this requirement.
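For the distinct-values case, the defining requirement can be checked numerically: the ranks are the inverse permutation of the sorting order (a NumPy sketch, with the example values chosen for illustration):

```python
import numpy as np

x = np.array([42.0, 1.0, 2.0, 2.71828, 3.14159])
order = np.argsort(x)                     # order[k] = index of the (k+1)-th smallest
ranks = np.empty_like(order)
ranks[order] = np.arange(1, len(x) + 1)   # invert the permutation: ranks[i] = r_i

# Verify the defining requirement x_{(r_i)} = x_i for every i.
sorted_x = np.sort(x)
assert all(sorted_x[r - 1] == xi for r, xi in zip(ranks, x))
```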
34,312
Which metric to use for estimating accuracy of a climate model?
The metric may not be the most important question; it rather depends on what you want to use the models for, which is not stated in the question. See the answer by @StephanKolassa. An important issue is that it is difficult to perform a like-for-like comparison between model output and observations, and frequently this is done without understanding how the models work, which is more important than the statistical/forecasting considerations. Some relevant issues:

The model runs are projections rather than predictions. A projection in this context is essentially a conditional prediction: if the forcings (e.g. the amount of GHG in the atmosphere) in reality match those in the scenario, then the projection is an estimate of what we should expect to see. So the result must consider how well the scenario matches reality.

Models do not directly predict weather (i.e. the day-to-day variation in temperature); they simulate weather that is statistically consistent with the conditions in the scenario, according to the physics of the model. This should come as no surprise: weather is chaotic (it is deterministic, but extremely sensitive to initial conditions). The best weather models have a useful prediction horizon of a matter of days, so there is no way a climate model (which is basically the same thing as a weather model) is going to be able to predict weather conditions years in advance. I suspect that is why you get very large errors. It makes no sense whatsoever to compare model output with observations at a daily timescale. A persistence model is always going to beat a climate model hands down on a daily timescale, even if the climate model is perfect, because the climate model doesn't have the exact initial conditions (and in practice is also spatially and temporally quantised). The persistence model gets its initial conditions from yesterday's weather, so it has a much easier job.

A climate model is essentially a Monte Carlo simulation, so we don't have just one run with one set of initial conditions; we have an ensemble of model runs, each with different initial conditions. The weather from day to day may be radically different in each run (and radically different from the observations). The distribution of model runs, however, gives an indication of the statistical properties of the weather that we can expect in future climate. This makes sense, as climate is the statistical properties of the weather. It means all we should expect is for the observations to lie within the spread of the model runs.

The models are spatially quantised. It makes no sense to compare station-level data with averages over the scale of the grid boxes used in climate models (typically several km or more). The OP mentions regional temperature, which is good. For a fair comparison, it would be best to estimate a gridded dataset that matches the grid used by the model. This may be an issue if the grid box contains a mixture of ocean and land but the region in the observations is land only.

Compare anomalies relative to some sensible baseline (preferably 30 years or more). Individual model runs can be quite variable in their average temperatures, but their projections of changes in temperature are much more reliable.

I mention these things because there have been journal papers written by experts in forecasting that were highly critical of climate models, but whose authors unfortunately didn't take the time to find out how climate models work, how they are used, or how the model output is interpreted. All of these mistakes have been made, and they are pitfalls for the unwary. If you have any questions about climate or climate models, do ask at the Earth Science SE (and tag me if you want me to see them, as I don't check it that often).

Update: To explain in a bit more detail why comparing model output with observations on a daily timescale doesn't make much sense, consider a perfect climate model. How could we create one? Say we had a means of visiting parallel dimensions and observing the weather on alternate Earths. If there is an infinite number of parallel dimensions, then there will be a large number where the climate forcings are exactly the same as those in our reality. Will they have the same weather? No. Say on one parallel Earth a butterfly flapped its wings some time in the Cretaceous, but the version on our Earth didn't. That would mean the initial conditions of the atmosphere on the two Earths at that point would be subtly different. As weather is chaotic, the patterns of weather on the two Earths would diverge, and you could easily get a day where it was scorching hot on one Earth and snowing on the other in the same region, if both conditions were consistent with the forcings (in the U.K. it has been known to snow in late spring and to have warm summer-like weather, so this is possible). We wouldn't expect the day-to-day weather on parallel Earths to be very similar. A climate model is attempting to be a simulation of such a parallel Earth, and we shouldn't expect the weather of the climate simulation to be any more similar than the weather on a real parallel Earth. The good thing is that by having many parallel Earths, or many model runs, we can estimate the spread of temperature that is feasible for the forcings. So the proper way of performing a model-observation comparison is to see where the observations lie within that spread.
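The ensemble comparison described above can be sketched numerically; the trend, the noise level, and the ensemble size below are invented purely for illustration, not taken from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_years = 30, 50

# Hypothetical ensemble: a shared forced trend plus run-specific
# internal variability (each run has different "initial conditions").
trend = 0.02 * np.arange(n_years)
ensemble = trend + rng.normal(0, 0.15, size=(n_runs, n_years))
obs = trend + rng.normal(0, 0.15, size=n_years)  # stand-in for observations

# Compare the observations against the spread of the runs, not any
# single run: here the 2.5th-97.5th percentile envelope per year.
lo, hi = np.percentile(ensemble, [2.5, 97.5], axis=0)
coverage = float(np.mean((obs >= lo) & (obs <= hi)))
```

If the observations routinely fall outside this envelope, that (not a daily RMSE against one run) is the kind of discrepancy worth investigating.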
34,313
Which metric to use for estimating accuracy of a climate model?
By "Absolute Mean Average" I assume you mean the Mean Absolute Error: you take the difference of each separate forecast and its associated actual, then take the mean over the absolute values of these differences. Minimizing the MAE amounts to eliciting the conditional median of the future temperature distributions: Mean absolute error OR root mean squared error? and Why does minimizing the MAE lead to forecasting the median and not the mean? For temperatures, which one can usually assume to be symmetrically distributed, there should not be a lot of difference between MAE and MSE (more precisely, between the forecasts that optimize each). You might be interested in the sections on accuracy measurement in Forecasting: Theory and Practice. Whether you should remove very bad forecasts depends. I would rather try to use them to learn under what circumstances your model breaks down. Such information can be very valuable. Also, removing very bad forecasts tells your model that if it forecasts off a little bit, you will be concerned, but if it is badly off, you don't mind any more. Is this the message you want to send to your model?
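The point that minimizing MAE elicits the median while minimizing MSE elicits the mean is easy to check on skewed data, where the two optima differ sharply. A quick sketch with made-up numbers:

```python
import numpy as np

# Skewed "actuals": one extreme value pulls the mean far from the median.
actuals = np.array([1.0, 2.0, 3.0, 4.0, 100.0])

med, avg = np.median(actuals), np.mean(actuals)  # 3.0 and 22.0

def mae(forecast):
    # Mean absolute error of a constant forecast against the actuals.
    return np.mean(np.abs(actuals - forecast))

def mse(forecast):
    # Mean squared error of a constant forecast against the actuals.
    return np.mean((actuals - forecast) ** 2)

print(mae(med), mae(avg))  # MAE is lower at the median than at the mean
print(mse(avg), mse(med))  # MSE is lower at the mean than at the median
```

For (near-)symmetric temperature distributions the median and mean coincide, which is why there should be little practical difference between the two measures in this application.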
34,314
What is the correlation between a random variable and its probability integral transform?
When $X$ has a uniform distribution on the interval $[-\sqrt{3},\sqrt{3}]$ it has unit variance and its distribution function on this interval is $$F_X(x) = \frac{1}{2\sqrt{3}}(\sqrt{3}+x),$$ whence it has a density on this interval equal to $$f_X(x) = F_X^\prime(x) = \frac{1}{2\sqrt{3}}$$ and zero everywhere else. Since $E[X]=0,$ the covariance is just the expected product $$\operatorname{Cov}(X, F_X(X)) = E[XF_X(X)] = \int_{-\sqrt{3}}^{\sqrt{3}} x \frac{\sqrt{3}+x}{2\sqrt{3}}\,\frac{\mathrm{d}x}{2\sqrt{3}} = \frac{1}{2\sqrt{3}}.$$ Because $X$ is a continuous random variable, $F_X(X)$ has a uniform distribution on $[0,1],$ whence its variance is $1/12.$ The correlation therefore is $$\operatorname{Cor}(X, F_X(X)) = \frac{\operatorname{Cov}(X, F_X(X))}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(F_X(X))}} = \frac{1/(2\sqrt{3})}{\sqrt{1/12}} = 1.$$ Thus, this universal upper bound can be attained.

Let $\epsilon$ be a (tiny) positive number and consider now any continuous variable $X$ with support on $[-1-\epsilon,-1]\cup[1,1+\epsilon].$ Suppose $\Pr(X \le 0) = 1-p$ and (therefore) $\Pr(X \gt 0) = p.$ Let's compute the correlation by finding the relevant moments.

[Figure: in the right-hand plot, both variables have been standardized to unit variance; their correlation coefficient is the slope of the least squares line shown. Here, $p=1/2.$]

Clearly $F_X(x)=0$ for $x \lt -1-\epsilon,$ rises continuously to a value of $1-p$ at $x=-1,$ is level at that value for $-1\lt x \lt 1,$ and then rises continuously to $1$ by the time $x$ reaches $1+\epsilon.$ Again, since $X$ is a continuous random variable, $F_X(X)$ is a uniform random variable on $[0,1].$ Also, since $X$ is closely approximated by a binary random variable $Y$ with $\Pr(Y=1)=p$ and $\Pr(Y=-1)=1-p,$ their variances will be close and $\operatorname{Var}(Y)=4p(1-p).$ The covariance is a little trickier.
Compute $$\operatorname{Cov}(X, F_X(X)) = E[X(F_X-1/2)] = \int_{-1-\epsilon}^{-1} x (F_X(x)-1/2)f_X(x)\,\mathrm{d}x + \int_1^{1+\epsilon} x (F_X(x)-1/2)f_X(x)\,\mathrm{d}x.$$ Integrate these by parts by splitting the integrands into $x$ and all the rest. The result is $p(1-p) + O(\epsilon).$ Consequently $$\operatorname{Cor}(X, F_X(X)) = \frac{p(1-p) + O(\epsilon)} {\sqrt{4p(1-p)+O(\epsilon)}\sqrt{1/12}} = \sqrt{3p(1-p)} + O(\epsilon).$$ This can be made as close to $0$ as we might like by making $p$ close to either $0$ or $1$ and shrinking $\epsilon.$ Consequently, any lower bound on the correlation cannot be positive.

[Figure: most of the density of $X$ has been pushed up against $\pm 1$ by shrinking $\epsilon.$ Now $p=1/200.$ The correlation has reduced from $0.87$ in the first figure to $0.13$ here.]

Finally, since $F_X$ is a non-decreasing function, the correlation of $X$ with $F_X$ cannot be negative. Coupled with the preceding observation we conclude:

Universal bounds for the correlation of $(X, F_X(X))$ are $0$ and $1.$ These are the best possible.

In fact, $0$ cannot be attained. (The intuitively obvious case would be to take the limits as $p\to 0$ and $\epsilon\to 0^+$ in the second example, but this reduces $X$ to a constant, where the correlation is undefined.)
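Both halves of the argument are easy to check by simulation. Sampling $X$ through the probability integral transform gives $F_X(X)$ for free as the underlying uniform variate, and the two-cluster construction reproduces correlations close to the values quoted for the figures ($0.87$ at $p=1/2$, near $0$ at $p=1/200$). A quick Python sketch, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 200_000, 1e-3

# Uniform case: F_X is linear in X, so the sample correlation is exactly 1.
x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)
fx = (np.sqrt(3) + x) / (2 * np.sqrt(3))
corr_uniform = np.corrcoef(x, fx)[0, 1]

def two_cluster_corr(p):
    # X supported on [-1-eps, -1] (mass 1-p) and [1, 1+eps] (mass p),
    # sampled by inverting F_X; the uniform draw U equals F_X(X) by the PIT.
    u = rng.uniform(size=n)
    x = np.where(u < 1 - p,
                 -1 - eps + eps * u / (1 - p),
                 1 + eps * (u - (1 - p)) / p)
    return np.corrcoef(x, u)[0, 1]

print(corr_uniform)             # 1.0 up to rounding
print(two_cluster_corr(0.5))    # roughly 0.87
print(two_cluster_corr(1/200))  # close to 0
```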
34,315
What is the correlation between a random variable and its probability integral transform?
If we assume $\mathbb E^F[X]=0$ then \begin{align} \mathbb E^F[XF(X)]&= \frac{1}{2}\int F(x)\{1-F(x)\}\,\text dx. \end{align} Indeed, assuming the pdf $f$ is associated with the cdf $F$, \begin{align} \mathbb E^F[XF(X)]&= \int x F(x) f(x)\,\text dx\\ &= \int_{-\infty}^0 x F(x) f(x)\,\text dx + \int_0^\infty x \{F(x) -1+1\}f(x)\,\text dx \\ &= \int_{-\infty}^0 x F(x) f(x)\,\text dx - \int_0^\infty x \{1-F(x) \}f(x)\,\text dx+ \int_0^\infty x f(x)\,\text dx\\ &= -\frac{1}{2}\int_{-\infty}^0 F(x)^2\,\text dx - \frac{1}{2} \int_0^\infty \{1-F(x) \}^2\,\text dx+ \int_0^\infty \{1-F(x)\}\,\text dx \end{align} by integration by parts. And, since $\mathbb E^F[X]=0$, $$\int_0^\infty \{1-F(x)\}\,\text dx=\int_{ -\infty} ^0 F(x)\,\text dx.$$ Note also that the variance of $X$, $\sigma^2$, does not impact the correlation, since $$\text{corr}(X,F_\sigma(X))=\sqrt{12}\,\dfrac{\mathbb E_\sigma(XF_\sigma(X))}{\sqrt{\text{var}_\sigma(X)}}=\sqrt{12}\,\dfrac{\mathbb E_\sigma(\sigma^{-1}XF_1(\sigma^{-1}X))}{\sqrt{\text{var}_\sigma(\sigma^{-1}X)}}=\sqrt{12}\,\mathbb E_1(XF_1(X))$$ (the factor $\sqrt{12}$ is the reciprocal of the standard deviation of $F(X)\sim\mathcal U(0,1)$). Another identity of possible interest is \begin{align} \mathbb E^F[XF(X)]&= \frac{1}{2}\mathbb E^F[\max\{X_1,X_2\}] \end{align} when $X_1,X_2$ are iid $F$ with mean $0$.
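These identities can be checked numerically for, say, the standard normal, where $\mathbb E[X\Phi(X)]=1/(2\sqrt\pi)\approx 0.2821$ (a standard fact, quoted here for reference rather than derived above). A sketch in Python, comparing the Monte Carlo estimate, the integral form, and the pairwise-maximum form:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(42)

# Monte Carlo estimate of E[X F(X)] for X ~ N(0,1), where F = Phi.
x = rng.standard_normal(1_000_000)
mc = np.mean(x * norm.cdf(x))

# (1/2) * integral of F(1-F) dx over the real line.
integral = 0.5 * quad(lambda t: norm.cdf(t) * (1 - norm.cdf(t)),
                      -np.inf, np.inf)[0]

# (1/2) * E[max(X1, X2)] for X1, X2 iid N(0,1).
x1, x2 = rng.standard_normal((2, 1_000_000))
half_max = 0.5 * np.mean(np.maximum(x1, x2))

exact = 1 / (2 * np.sqrt(np.pi))  # closed form for the standard normal
print(mc, integral, half_max, exact)  # all close to 0.2821
```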
34,316
How large of a dataset should I use for building a statistical model?
You should usually fill in missing data when you can.

Whilst 40K+ rows is certainly a substantial dataset, the important issue here isn't so much the size of the dataset, but the question of whether or not the missing values in the data are "ignorable". When we have a large dataset with missing values, and we propose to use only those entries that don't have missing values, that is called a "complete case analysis". The danger of a complete case analysis is that the missingness of entries in the data could be systematically related to one or more of the variables under analysis, in which case ignoring records that have missing data will bias the analysis (sometimes severely). Practically speaking, missing data is rarely ignorable, particularly in cases where it affects a substantial proportion of the records in the overall dataset.

Dealing with missing data is an extremely complicated exercise, and the statistical theory and methods for this are quite advanced. Proper methods for dealing with missing data generally involve either explicit statistical modelling of the "missingness" pattern or multiple imputation of missing values using explicit or implicit models. This is difficult and time-consuming and it always comes with some modelling assumptions that are hard to test empirically. Even with the best methods, having substantial amounts of missing data often leads to inferences that are highly uncertain or non-robust to modelling assumptions. For this reason, if you have a cost-effective investigation method that allows you to fill in a substantial amount of your missing data, it is usually worth doing that. Having a better, more complete, dataset is much better than having a patchy dataset and using missing data techniques on it (even if these are done well). So while 40K+ data points is already a lot, I recommend you take your proposed action to fill in as much of the missing data as you can.
Increasing the size of your (complete case) dataset is one small advantage of this, but the much bigger advantage is that you will diminish the likelihood of getting a biased analysis due to data that is missing in a manner that is related to the variables of interest in your analysis.
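The bias from a complete case analysis is easy to demonstrate: when the probability that a value is missing depends on the value itself, the complete cases are a biased sample. A minimal synthetic sketch (invented numbers, not the OP's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40_000

# True outcome for every record (synthetic).
y = rng.normal(loc=50, scale=10, size=n)

# Non-ignorable missingness: larger values are more likely to go missing
# (e.g. the hardest cases are the ones left undocumented).
p_missing = 1 / (1 + np.exp(-(y - 55) / 5))  # logistic in y
missing = rng.uniform(size=n) < p_missing

true_mean = y.mean()
complete_case_mean = y[~missing].mean()

print(true_mean)           # close to 50
print(complete_case_mean)  # noticeably lower: the complete cases are biased
```

Filling in the missing values by investigation removes this selection effect at the source, which no statistical adjustment can do as reliably.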
34,317
Assumptions of Mann-Whitney test for at least ordinal data
I just stumbled across this, and since I am the author of Karch (2021) and do not fully agree with the answers so far, here are my two cents. I will skip the assumption of no ties, as there is agreement that it is unnecessary (for the alternatives Christian and I discuss).

We have to first decide what properties the assumptions should guarantee. Fay and Proschan (2010) and I (influenced by them) focussed on [approximate] validity (type I error rate is below significance level $\alpha$ [at least in large samples]) and consistency (with larger sample sizes, power approaches 1). We also have to agree on what the proper alternative is. I agree with Divine et al. that it should be $H_1:p\neq\frac{1}{2}$, with $p=P(X<Y) + \frac{1}{2}P(X=Y)$. I am surprised that there is controversy around this, since the test statistic used is the sample equivalent of $p$ (see Karch (2021), p. 6).

Under this setup, the Wilcoxon-Mann-Whitney (WMW) test requires that $H_0:F=G$ is used as the null hypothesis (see Fay and Proschan (2010), Table 1). Rephrased as an assumption, we thus have to be sure that if $F$ and $G$ are not equal, $p\neq \frac{1}{2}$. Fay and Proschan call this Perspective 3 and state that this situation is unrealistic (this is already in the question, but I felt it was important to highlight), with which I fully agree. To make this quote understandable, I define $\mathcal{M}:=H_0\lor H_1$. Note that I changed the notation slightly.

... Perspective 3 ... is a focusing one since the full probability set, $\mathcal{M}$, is created more for mathematical necessity than by any scientific justification for modeling the data, which in this case does not include distributions with both $p = 1/2$ and $F \neq G$.
It is hard to imagine a situation where this complete set of allowable models, $\mathcal{M}$, and only that set of models is justified scientifically.

Thus, while this is technically the correct assumption for the WMW, it is hard to imagine situations in which it is actually met, and it is thus a bit irrelevant. One example that is outside of $\mathcal{M}$ is that $F$ and $G$ are normal but have different variances. I demonstrate in Karch (2021) that type I error rates of the WMW test can be inflated in this example, even in large samples.

Beyond this, if we extend the properties our assumptions should guarantee to be correct standard errors, good power, and confidence intervals with correct coverage, which seems reasonable, then the WMW is not appropriate even under the unrealistic Perspective 3. As Wilcox (2017) says:

A practical concern is that if groups differ, then under general circumstances the wrong standard error is being used by the Wilcoxon–Mann–Whitney test, which can result in relatively poor power and an unsatisfactory confidence interval. (p. 279)

To give an example, consider $F=\mathcal{N}(0, 2)$ and $G=\mathcal{N}(0.2, 1)$. The alternative hypothesis $H_1$ is thus true. However, the WMW test can be biased in this situation (the power is smaller than the significance level $\alpha$). See:

set.seed(123)
library(brunnermunzel)
reps <- 10^3
p_wmw <- p_BM <- rep(NA, reps)
for (i in 1:reps) {
  g1 <- rnorm(80, mean = 0, sd = 2)
  g2 <- rnorm(20, mean = .2, sd = 1)
  p_wmw[i] <- wilcox.test(g1, g2)$p.value
  p_BM[i] <- brunnermunzel.test(g1, g2)$p.value
}
print(mean(p_wmw < .05))
[1] 0.034

Overall, the situation is equivalent to the much more well-known and appreciated problems with Student's $t$ test. Again from Wilcox (2017):

The situation is similar to Student’s T test. When the two distributions are identical, a correct estimate of the standard error is being used.
But otherwise, under general conditions, an incorrect estimate is being used, which results in practical concerns, in terms of both Type I errors and power. (p. 278)

Just as Welch's $t$ test is a small modification of Student's $t$ test that alleviates these problems, as it provides correct standard errors in general circumstances, Brunner-Munzel's test is a small modification of Wilcoxon's test that provides correct standard errors in general circumstances (both tests can still fail in smaller samples, but the problems are much less severe, as at least asymptotically Brunner-Munzel's test provides correct standard errors). There seems to be widespread agreement to use Welch's instead of Student's t test for these reasons (see, for example, Is variance homogeneity check necessary before t-test?). For the same reasons, we should usually use Brunner-Munzel's instead of Wilcoxon's test.

The assumptions for Brunner-Munzel's test to have correct standard errors in large samples are rather general and technical. They are described in detail in Brunner et al. (2018). However, they are so general that they are rarely violated. A more practically relevant question is what sample sizes are needed in practice for the standard error to be "correct enough". Simulation studies (see Karch (2021), as well as the references therein) suggest that this is true for rather small sample sizes. No meaningful type I error inflation has been found yet for $n_1,n_2\geq 10$. However, for small sample sizes the permutation version of the test is recommended.

Thus, in practice, it seems fine to treat the Brunner-Munzel test as a test for $H_0:p=\frac{1}{2}, H_1:p\neq\frac{1}{2}$, without additional assumptions (beyond i.i.d.). As all the problems of the WMW test just discussed tend to disappear for equal sample sizes (see Brunner et al. (2018); note that this is again equivalent to Student's t test), it also seems fine to use the WMW instead when sample sizes are (roughly) equal.
I would still use the Brunner-Munzel test even if sample sizes are equal, as its implementations in R provide confidence intervals for $p$, whereas the WMW implementations (I am aware of) do not.
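For readers working in Python rather than R, the same simulation can be reproduced with `scipy.stats.mannwhitneyu` and `scipy.stats.brunnermunzel`; the rejection rates will differ slightly from the R run, since the random streams differ:

```python
import numpy as np
from scipy.stats import mannwhitneyu, brunnermunzel

rng = np.random.default_rng(123)
reps = 500
p_wmw = np.empty(reps)
p_bm = np.empty(reps)

for i in range(reps):
    # Same setup as the R code: unequal variances and unequal sample sizes.
    # H1 is true here (p = P(X<Y) + P(X=Y)/2 is about 0.54, not 1/2).
    g1 = rng.normal(0.0, 2.0, size=80)
    g2 = rng.normal(0.2, 1.0, size=20)
    p_wmw[i] = mannwhitneyu(g1, g2, alternative="two-sided").pvalue
    p_bm[i] = brunnermunzel(g1, g2).pvalue

# The WMW rejection rate falls well below the nominal 0.05, illustrating
# the bias; Brunner-Munzel uses a correct standard error in this setting.
print("WMW rejection rate:", np.mean(p_wmw < 0.05))
print("BM  rejection rate:", np.mean(p_bm < 0.05))
```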
Assumptions of Mann-Whitney test for at least ordinal data
I just stumbled across this, and since I am the author of Karch (2021) and do not fully agree with the answers so far, here are my two cents. I will skip the assumption of no ties as there is agreemen
I just stumbled across this, and since I am the author of Karch (2021) and do not fully agree with the answers so far, here are my two cents. I will skip the assumption of no ties, as there is agreement that it is unnecessary (for the alternatives Christian and I discuss).

We first have to decide what properties the assumptions should guarantee. Fay and Proschan (2010) and I (influenced by them) focused on [approximate] validity (the type I error rate is below the significance level $\alpha$ [at least in large samples]) and consistency (with larger sample sizes, power approaches 1). We also have to agree on what the proper alternative is. I agree with Divine et al. that it should be $H_1:p\neq\frac{1}{2}$, with $p=P(X<Y) + \frac{1}{2}P(X=Y)$. I am surprised that there is controversy around this, since the test statistic used is the sample equivalent of $p$ (see Karch (2021), p. 6).

Under this setup, the Wilcoxon-Mann-Whitney (WMW) test requires that $H_0:F=G$ is used as the null hypothesis (see Fay and Proschan (2010), Table 1). Rephrased as an assumption, we thus have to be sure that if $F$ and $G$ are not equal, $p\neq \frac{1}{2}$. Fay and Proschan call this Perspective 3 and state that this situation is unrealistic (this is already in the question, but I felt it was important to highlight it), with which I fully agree. To make this quote understandable, I define $\mathcal{M}:=H_0\lor H_1$. Note that I changed the notation slightly.

... Perspective 3 ... is a focusing one since the full probability set, $\mathcal{M}$ is created more for mathematical necessity than by any scientific justification for modeling the data, which in this case does not include distributions with both $p = 1/2$ and $F \neq G$. It is hard to imagine a situation where this complete set of allowable models, $\mathcal{M}$, and only that set of models is justified scientifically;

Thus, while this is technically the correct assumption for the WMW test, it is hard to imagine situations in which it is actually met, and thus it is a bit irrelevant. One example that is outside of $\mathcal{M}$ is that $F$ and $G$ are normal but have different variances. I demonstrate in Karch (2021) that type I error rates of the WMW test can be inflated in this example, even in large samples.

Beyond this, if we extend the properties our assumptions should guarantee to include correct standard errors, good power, and confidence intervals with correct coverage, which seems reasonable, then the WMW test is not appropriate even under the unrealistic Perspective 3. As Wilcox (2017) says:

A practical concern is that if groups differ, then under general circumstances the wrong standard error is being used by the Wilcoxon–Mann–Whitney test, which can result in relatively poor power and an unsatisfactory confidence interval. (p. 279)

To give an example, consider $F=\mathcal{N}(0, 2)$ and $G=\mathcal{N}(0.2, 1)$. The alternative hypothesis $H_1$ is thus true. However, the WMW test can be biased in this situation (the power is smaller than the significance level $\alpha$). See:

set.seed(123)
library(brunnermunzel)
reps <- 10^3
p_wmw <- p_BM <- rep(NA, reps)
for (i in 1:reps) {
  g1 <- rnorm(80, mean = 0, sd = 2)
  g2 <- rnorm(20, mean = .2, sd = 1)
  p_wmw[i] <- wilcox.test(g1, g2)$p.value
  p_BM[i] <- brunnermunzel.test(g1, g2)$p.value
}
print(mean(p_wmw < .05))
[1] 0.034

Overall, the situation is equivalent to the much better-known and appreciated problems with Student's $t$ test. Again from Wilcox (2017):

The situation is similar to Student’s T test. When the two distributions are identical, a correct estimate of the standard error is being used. But otherwise, under general conditions, an incorrect estimate is being used, which results in practical concerns, in terms of both Type I errors and power. (p. 278)

Just as Welch's $t$ test is a small modification of Student's $t$ test that alleviates these problems, as it provides correct standard errors in general circumstances, Brunner-Munzel's test is a small modification of Wilcoxon's test that provides correct standard errors in general circumstances (both tests can still fail in smaller samples, but the problems are much less severe, as at least asymptotically Brunner-Munzel's test provides correct standard errors). There seems to be widespread agreement to use Welch's instead of Student's $t$ test for these reasons (see, for example, Is variance homogeneity check necessary before t-test?). For the same reasons, we should usually use Brunner-Munzel's instead of Wilcoxon's test.

The assumptions for Brunner-Munzel's test to have correct standard errors in large samples are rather general and technical. They are described in detail in Brunner et al. (2018). However, they are so general that they are rarely violated. A more practically relevant question is what sample sizes are needed in practice for the standard error to be "correct enough". Simulation studies (see Karch (2021), as well as the references therein) suggest that rather small sample sizes suffice. No meaningful type I error inflation has been found yet for $n_1,n_2\geq 10$. However, for small sample sizes the permutation version of the test is recommended. Thus, in practice, it seems fine to treat the Brunner-Munzel test as a test of $H_0:p=\frac{1}{2}$ vs. $H_1:p\neq\frac{1}{2}$, without additional assumptions (beyond i.i.d. sampling).

As all the problems of the WMW test just discussed tend to disappear for equal sample sizes (see Brunner et al. (2018); note that this is again analogous to Student's $t$ test), it also seems fine to use the WMW test instead when sample sizes are (roughly) equal. I would still use the Brunner-Munzel test even if sample sizes are equal, as its implementations in R provide confidence intervals for $p$, whereas the WMW implementations (I am aware of) do not.
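The quantity $p=P(X<Y)+\frac{1}{2}P(X=Y)$ that both the WMW and Brunner-Munzel tests target can be estimated directly from two samples as the fraction of pairs $(x_i, y_j)$ with $x_i < y_j$, counting ties as one half. The answer's simulations are in R; the following minimal Python sketch (not part of the original answer) shows the plug-in estimate:

```python
def relative_effect(x, y):
    """Plug-in estimate of p = P(X < Y) + 0.5 * P(X = Y),
    computed over all pairs (x_i, y_j)."""
    less = sum(1 for xi in x for yj in y if xi < yj)
    ties = sum(1 for xi in x for yj in y if xi == yj)
    return (less + 0.5 * ties) / (len(x) * len(y))

# p = 1/2 indicates neither sample tends to produce larger values.
print(relative_effect([1, 2, 3], [2, 3, 4]))  # 7/9 ≈ 0.778
```

This is the same quantity (up to normalization by $n_1 n_2$) as the Mann-Whitney $U$ statistic.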
34,318
Assumptions of Mann-Whitney test for at least ordinal data
The null hypothesis of the MW-test, under which the distribution of the test statistic is computed, is $H_0:\ F=G$: the two distributions are the same. This obviously implies that their variances are the same, but the latter "assumption" doesn't actually add anything (see below though). It is also assumed that the data are i.i.d.

I think the confusion about ties comes from imprecision about what is actually meant when referring to the MW-test: just the test statistic, or also the distribution under the $H_0$. If there are ties, both asymptotically and for finite samples, the distribution under the $H_0$ that is used for testing has to be modified. This can be done (so the test can be applied); however, the test can be seen as invalid if this is not done.

Now how about the "equal variances" assumption? I have mentioned the null hypothesis; however, one can state that a valid test does not only require that the distribution under $H_0$ is correctly specified, but also that it has some properties under the alternative. Something of a minimal requirement is that the test should be unbiased, i.e., that the probability to reject under any distribution in the alternative should not be smaller than $\alpha$, the probability to reject under the $H_0$. Unbiasedness follows easily for the alternative that I have learnt (and that is one of the possibilities mentioned in Fay and Proschan), which is that $F$ is stochastically larger than $G$ (i.e., the cdf of $F$ is everywhere smaller than or equal to that of $G$, and somewhere smaller). This does not require equal variances, and neither does "Perspective 3" as cited above from Fay and Proschan.

Although there are examples of pairs of distributions with unequal variances with $F\neq G$ and $P(X_1>X_2)+\frac{1}{2}P(X_1=X_2)=\frac{1}{2}$ (I believe, though I haven't checked, that this holds for two Gaussian distributions with equal mean and different variances), I don't think it makes sense to say that the MW-test "assumes equal variances". Computation of the distribution of the test statistic under $H_0$ assumes even more than that, and the valid alternatives stated above, against which the test is unbiased, contain many pairs of distributions with unequal variances. In fact one could state that using the first alternative given in the question (which amounts to Fay and Proschan's Perspective 3) there is no further assumption beyond i.i.d. at all, as this contains all distributions. But Julian Karch (see his answer) has shown that the MW-test is not generally unbiased against this alternative. If you are really interested in this alternative, he recommends the Brunner-Munzel test.

However, there may be assumptions implied by certain interpretations that are given to the test result, so this is something to be careful about. If, for example, a rejection of the null hypothesis is taken as evidence that $F$ is stochastically larger than $G$, one should know that the test is also unbiased against some alternatives for which this isn't the case, and it is implicitly assumed that these do not obtain (one such possibility would be Gaussian distributions with different means and different variances - this belongs to the "Perspective 3" alternative as far as I can see, but not to the "stochastically larger" alternative). Also, as Fay and Proschan mention, there are distributions for which $F\neq G$ and $P(X_1>X_2)+\frac{1}{2}P(X_1=X_2)=\frac{1}{2}$, which cannot be detected by the MW-test (although it is not so clear whether the user in such a case would rather want to reject, or whether they'd be happy to say that there is no evidence that one distribution tends to be larger than the other).

The MW-test can be safely used to test $F=G$ against the "stochastically larger" alternative, which is how I think most people would interpret the test result, i.e., $F$ tends to produce systematically larger (or smaller) observations than $G$. The issue here is that not everything that is possible is covered; in reality it may be the case that $F\neq G$ but neither of them is stochastically larger than the other; for example, $F$ may produce more very large and more very small observations than $G$. In a real application I'd therefore look at visualisations such as boxplots and histograms to see whether this might be the case, and interpret results with caution.

Summarising, Fay and Proschan's distinction of different "perspectives" is important, because in fact different perspectives make different implicit assumptions when interpreting the test result, and not being aware of this may lead to misinterpretation. One could say that running the test itself, mathematically, does not require such assumptions (one can just take as null hypothesis all distributions that have rejection probability $\le\alpha$ and as alternative all those for which the rejection probability is larger), but making sense of the result does.
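The parenthetical claim above — that two Gaussians with equal means and different variances satisfy $P(X_1>X_2)+\frac{1}{2}P(X_1=X_2)=\frac{1}{2}$ — does hold: $X_1-X_2$ is then symmetric about zero and continuous. A quick simulation check (a Python sketch of my own, not part of the original answer; the answers here otherwise use R):

```python
import random

random.seed(1)
n = 200_000
# X ~ N(0, sd=1), Y ~ N(0, sd=3): same mean, very different variances.
x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 3) for _ in range(n)]
# Paired estimate of P(X < Y); ties have probability 0 for continuous data.
p_hat = sum(xi < yi for xi, yi in zip(x, y)) / n
print(round(p_hat, 3))  # close to 0.5
```

So this pair of distributions lies in the "Perspective 3" null region despite $F\neq G$.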
34,319
Assumptions of Mann-Whitney test for at least ordinal data
There is some disagreement as to the 'proper' use of the two-sample Wilcoxon (rank sum) test. Perhaps this is because it is often used in ways that might surprise its creators, and because various software programs have implemented a wide variety of versions to accommodate (moderate proportions of) ties and other departures from canonical assumptions. One way to be reasonably sure how the Wilcoxon RS test works in a particular situation is to try it out and see what actually happens.

The following brief simulations address the assumption that the two populations must be of the same shape, differing only by a shift; this assumption is often taken to mean that the population variances must be equal. By contrast, the implementation in R can be viewed as a test of whether one distribution stochastically dominates the other--up to a point, regardless of shape or of variance.

I use the test to compare samples of size 50 from distributions (a) $\mathsf{Norm}(\mu=100,\sigma=5),$ (b) $\mathsf{Norm}(\mu=100,\sigma=10),$ and (c) $\mathsf{Norm}(\mu=105,\sigma=10).$ First, we use the Wilcoxon RS test to compare null (a) with alternative (b), a difference in shapes; second, to compare null (a) with alternative (c), a difference in shapes and locations.

set.seed(1123)
pv = replicate(10^4, wilcox.test(rnorm(50, 100, 5), rnorm(50,100,10))$p.val)
mean(pv <= .05)
[1] 0.0577    # (a vs b) true level about 6%, not exactly 5%
par(mfrow=c(1,3))
hist(pv, prob=T, col="skyblue2", main="Same Centers")
pv = replicate(10^4, wilcox.test(rnorm(50, 100, 5), rnorm(50,105,10))$p.val)
mean(pv <= .05)
[1] 0.8483    # (a vs c) power about 85%
hist(pv, prob=T, br=20, col="skyblue2", main="Different Centers")
curve(pnorm(x,100,5), 50, 150, lwd=2, col="green3", lty="dashed")
curve(pnorm(x,100,10), add=T, col="blue")
curve(pnorm(x,105,10), add=T, col="maroon", lty="dotted")
par(mfrow=c(1,1))

The first panel of the figure shows the roughly uniform distribution of P-values of comparison (a) vs (b), and the second shows the power (left-most histogram bar) of comparison (a) vs (c). The third panel shows that neither distribution (a) [broken green] nor (b) [solid blue] is stochastically dominant. It also shows that (c) [dotted red] dominates (a), plotting mainly to the right of and below (a).

Finally, we note that, because the data are normal, the most appropriate test to compare (a) and (b) would be a two-sample Welch t test, which does not assume equal variances; its significance level is very near the nominal 5% level (no figure).

set.seed(1123)
pv = replicate(10^4, t.test(rnorm(50, 100, 5), rnorm(50,100,10))$p.val)
mean(pv <= .05)
[1] 0.0484    # aprx 5%

The point here is not to give an exhaustive catalog of the properties of any one implementation of the Wilcoxon RS test. It is to illustrate how simple simulations can help to settle particular controversies.

Note: Original versions of the Wilcoxon rank sum test and the Mann-Whitney U test used different, but essentially equivalent, test statistics.

Addendum, per Comment. If the task is to test whether $\mathsf{Beta}(1,3) \ne \mathsf{Beta}(3,1),$ based on ten observations from each distribution, then the two-sample Wilcoxon test (2-sided) will do the job with power very nearly 1:

set.seed(2022)
pv = replicate(10^5, wilcox.test(rbeta(10, 1,3), rbeta(10, 3,1))$p.val)
mean(pv <= 0.05)
[1] 0.99692

However, it seems that the meaning of rejection ('perspective') should not be that the medians of the two distributions are about $\eta_1=0.2063$ and $\eta_2=0.7937,$ and even less that the median has "shifted" upward. The two distributions have very different shapes. It is clear from plots of the empirical CDFs of two samples of size ten that $\mathsf{Beta}(3,1)$ (blue) dominates (tends to give larger values than) the former:

set.seed(622)
x1 = rbeta(10, 1, 3)
x2 = rbeta(10, 3, 1)
hdr = "ECDF Plots: BETA(3,1) Dominates"
plot(ecdf(x2), col="blue", xlim=0:1, main=hdr)
plot(ecdf(x1), add=T, col="brown")
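The dominance in the addendum can also be checked analytically rather than from sample ECDFs: $\mathsf{Beta}(1,3)$ has CDF $1-(1-x)^3$ and $\mathsf{Beta}(3,1)$ has CDF $x^3$, and the latter lies below the former on all of $(0,1)$. A small Python check (my own illustration, not part of the original answer):

```python
# Closed-form CDFs of Beta(1,3) and Beta(3,1) on (0,1).
F13 = lambda x: 1 - (1 - x) ** 3
F31 = lambda x: x ** 3

# Beta(3,1) dominates Beta(1,3): its CDF is <= everywhere, < somewhere.
grid = [i / 1000 for i in range(1, 1000)]
assert all(F31(x) <= F13(x) for x in grid)
assert any(F31(x) < F13(x) for x in grid)

# Medians quoted in the answer: solve F(m) = 1/2 for each CDF.
print(round(1 - 0.5 ** (1 / 3), 4), round(0.5 ** (1 / 3), 4))  # 0.2063 0.7937
```

This recovers the medians $\eta_1=0.2063$ and $\eta_2=0.7937$ mentioned above exactly.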
34,320
Maximum value on a set of die rolls --- how to prove that this is a Markov chain?
Yes, this is a Markov chain. In general, in order to show that the process is a Markov chain, you will need to show that the transition probability depends on the "history" of the chain only through its most recent value. In the present case it is relatively simple to derive the exact form of the transition probabilities, so this can be done directly.

Your question does not specify how many sides your die has, so I am going to proceed for the general case where we have a fair die with $k \in \mathbb{N}$ sides. Outcomes of the rolls are represented by the sequence $X_1,X_2,X_3,... \sim \text{IID U} \{ 1,...,k \}$, and $Z_n \equiv \max (X_1,...,X_n)$ is the maximum outcome in the first $n$ rolls. We can write the latter quantity in its recursive form as $Z_{n+1} = \max (Z_n, X_{n+1})$, which gives the inverse relationship: $$\begin{matrix} X_{n+1} \leqslant Z_n \quad & & & \text{if } Z_{n+1} = Z_n, \\[6pt] X_{n+1} = Z_{n+1} & & & \text{if } Z_{n+1} > Z_n. \\[6pt] \end{matrix}$$

Using the independence of the underlying sequence of uniform values, it is simple to establish that $X_{n+1} \ \bot \ (Z_1,...,Z_n)$, so for all $z=1,...,k$ we have the following transition probabilities: $$\begin{align} T_{n+1}(z|\mathbf{z}_n) &\equiv \mathbb{P}(Z_{n+1}=z| Z_1 = z_1,...,Z_n = z_n) \\[14pt] &= \begin{cases} 0 & & & \text{if } z < z_n \\[10pt] \mathbb{P}(X_{n+1} \leqslant z| Z_1 = z_1,...,Z_n = z_n) & & & \text{if } z = z_n \\[10pt] \mathbb{P}(X_{n+1} = z| Z_1 = z_1,...,Z_n = z_n) & & & \text{if } z > z_n \\[10pt] \end{cases} \\[14pt] &= \begin{cases} 0 & & & \text{if } z < z_n \\[10pt] \mathbb{P}(X_{n+1} \leqslant z) & & & \text{if } z = z_n \\[10pt] \mathbb{P}(X_{n+1} = z) & & & \text{if } z > z_n \\[10pt] \end{cases} \\[14pt] &= \begin{cases} 0 & & & \text{if } z < z_n \\[10pt] z/k & & & \text{if } z = z_n \\[10pt] 1/k & & & \text{if } z > z_n \\[10pt] \end{cases} \\[14pt] &= \frac{1}{k} \cdot \mathbb{I}(z > z_n) + \frac{z}{k} \cdot \mathbb{I}(z = z_n). \\[6pt] \end{align}$$

Since this transition probability depends on the history $\mathbf{z}_n$ only through the value $z_n$, you have a Markov chain.
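The derived kernel — stay at $z_n$ with probability $z_n/k$, jump to each $z>z_n$ with probability $1/k$ — is easy to check empirically. A Python sketch (my own illustration, not part of the original answer), with $k=6$ and current state $z_n=3$:

```python
import random
from collections import Counter

random.seed(7)
k, z_n, n = 6, 3, 60_000
# One-step transition Z_{n+1} = max(z_n, X_{n+1}) for a fair k-sided die.
counts = Counter(max(z_n, random.randint(1, k)) for _ in range(n))
freqs = {z: counts[z] / n for z in range(1, k + 1)}
# Theory: P(stay at 3) = 3/6, P(4) = P(5) = P(6) = 1/6, P(1) = P(2) = 0.
print({z: round(f, 3) for z, f in freqs.items()})
```

The empirical frequencies match $T(z|z_n)$ to within Monte Carlo error, and by construction they depend on the past only through the current maximum.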
34,321
Maximum value on a set of die rolls --- how to prove that this is a Markov chain?
To prove that $$P(Z_{n+1}|Z_n)=P(Z_{n+1}|Z_1,...,Z_n)$$ you can use induction: if you can prove that it holds for a base case (here $n=1$) and, assuming it is true for some $n$, prove that it holds for $n+1$, then induction shows that it holds for every $n$. Since \begin{align} P(Z_{2}=z| Z_1 = z_1) = \begin{cases} 0 & & & \text{if } z < z_1 \\[10pt] z/k & & & \text{if } z = z_1 \\[10pt] 1/k & & & \text{if } z > z_1 \\[10pt] \end{cases} \end{align} and \begin{align} P(Z_{n+1}=z| Z_1 = z_1, ... ,Z_n = z_n) &= \begin{cases} 0 & & & \text{if } z < z_n \\[10pt] z/k & & & \text{if } z = z_n \\[10pt] 1/k & & & \text{if } z > z_n \\[10pt] \end{cases}\\[14pt] &=P(Z_{n+1}=z| Z_n = z_n) \\ \end{align} induction shows that $P(Z_{n+1}|Z_1,...,Z_n)=P(Z_{n+1}|Z_n)$.
34,322
Unable to get correct coefficients for logistic regression in simulated dataset
If you're trying to generate data from logistic regression's assumed data-generating mechanism, your code does not do that. Logistic regression's data-generating mechanism looks like $$ \eta = X\beta$$ $$ p = \dfrac{1}{1+e^{-\eta}}$$ $$ y \sim \operatorname{Binomial}(p, n) $$

What it looks like you're trying to do is create a linear regression in the log-odds space, error term included. That is incorrect. The error term comes from the binomial likelihood. To create data properly so that glm will estimate the parameters you've specified, you need to do

library(sigmoid)
N <- 10000
age <- runif(N, min=20, max=90)
# Changes here
p <- logistic(-100 + 2*age)
hid <- rbinom(N, 1, p)
# End changes
df <- data.frame(age=age, hid=hid)
lr <- glm(hid ~ age, data=df, family=binomial(link="logit"))
s <- summary(lr)
print(s)
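The same generating mechanism can be sketched in Python (my own illustration, not part of the original answer; the coefficients $\beta_0=-5$, $\beta_1=0.1$ are milder hypothetical values chosen for illustration, not the ones in the question):

```python
import math
import random

random.seed(42)
N = 50_000
b0, b1 = -5.0, 0.1  # hypothetical coefficients, chosen for illustration

age = [random.uniform(20, 90) for _ in range(N)]
p = [1 / (1 + math.exp(-(b0 + b1 * a))) for a in age]  # inverse-logit
y = [1 if random.random() < pi else 0 for pi in p]      # Bernoulli draw

# The randomness lives entirely in the Bernoulli step: on average the
# observed event rate matches the mean of the model probabilities.
print(round(sum(y) / N, 3), round(sum(p) / N, 3))
```

Note there is no additive error term anywhere in the log-odds; the only noise is the Bernoulli draw, which is exactly the point of the answer.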
34,323
Generating random samples obeying the exponential distribution with a given min and max
You describe truncation to an interval. I will elaborate. Suppose $X$ is any random variable (such as an exponential variable) and let $F_X$ be its distribution function, $$F_X(x) = \Pr(X\le x).$$ For an interval $[a,b],$ the truncation limits $X$ to that interval. That lops off some probability from $X,$ namely the chance that $X$ either is less than $a$ or greater than $b.$ The chance that is left is $$\Pr(X\in[a,b]) = \Pr(X\le b) - \Pr(X\le a) + \Pr(X=a) = F_X(b) - F_X(a) + \Pr(X=a).$$ Thus, to make the total probability come out to $1,$ the distribution function for the truncated $X$ must be zero when $x\lt a,$ $1$ when $x\ge b,$ and otherwise is $$F_X(x;a,b) = \frac{\Pr(X\in[a,x])}{\Pr(X\in[a,b])}= \frac{F_X(x) - F_X(a) + \Pr(X=a)}{F_X(b) - F_X(a) + \Pr(X=a)}.$$ When you can compute the inverse of the distribution function--which almost always means $X$ is a continuous variable--it's straightforward to generate samples: draw a uniform random probability $U$ (from the interval $[0,1],$ of course) and find a number $x$ for which $F_X(x) = U.$ This value is written $$x = F^{-1}_X(U).$$ $F_X^{-1}$ is called the "percentage point function" or "inverse distribution function." For example, when $X$ has an Exponential distribution with rate $\lambda \gt 0,$ $$U = F_X(x) = 1 - \exp(-\lambda x),$$ which we can solve to obtain $$F_X^{-1}(U) = -\frac{1}{\lambda}\log(1-U)$$ (and since $1-U$ is also uniform on $[0,1],$ this is often implemented as $-\log(U)/\lambda$). This is called "inverting the distribution" or "applying the percentage point function." It turns out--and this is the point of this post--that when you can invert $F_X,$ you can also invert the truncated distribution. Given $U,$ this amounts to solving $$U = F_X(x;a,b) = \frac{F_X(x)-F_X(a)}{F_X(b) - F_X(a)},$$ because (since we are now assuming $X$ is continuous) the terms $\Pr(X=a)=0$ drop out.
The solution is $$x = F_X^{-1}(U;a,b) = F_X^{-1}\left(F_X(a)+\left[F_X(b) - F_X(a)\right]U\right).$$ That is, the only change is that after drawing $U,$ you must rescale and shift it to make its value lie between $F_X(a)$ and $F_X(b),$ and then you invert it. This yields the second formula in the question. An equivalent procedure is to draw a uniform value $V$ from the interval $[F_X(a),F_X(b)]$ and compute $F_X^{-1}(V).$ This works because the scaled and shifted version of $U$ has a uniform distribution in this interval. I use this method in the code below. The figure illustrates the results of this algorithm with $\lambda=1/2$ and truncation to the interval $[2,7].$ I think it alone is a pretty good verification of the procedure. The R code is general-purpose: replace ff (which implements $F_X$) and f.inv (which implements $F^{-1}_X$) with the corresponding functions for any continuous random variable.

#
# Provide a CDF and its percentage point function.
#
lambda <- 1/2
ff <- function(x) pexp(x, lambda)
f.inv <- function(q) qexp(q, lambda)
#
# Specify the interval of truncation.
#
a <- 2
b <- 7
#
# Simulate data and truncated data.
#
n <- 1e6
x <- f.inv(runif(n))
x.trunc <- f.inv(runif(n, ff(a), ff(b)))
#
# Draw histograms.
#
dx <- (b - a) / 25
bins <- seq(a - ceiling((a - min(x))/dx)*dx, max(x)+dx, by=dx)
h <- hist(x.trunc, breaks=bins, plot=FALSE)
hist(x, breaks=bins, freq=FALSE, ylim=c(0, max(h$density)), col="#e0e0e0",
     xlab="Value", main="Histogram of X and its truncated version")
plot(h, add=TRUE, freq=FALSE, col="#2020ff40")
abline(v = c(a,b), lty=3, lwd=2)
mtext(c(expression(a), expression(b)), at = c(a, b), side=1, line=0.25)
34,324
Generating random samples obeying the exponential distribution with a given min and max
whuber has given you a general answer showing the overall technique. I will give you a shorter answer that focuses only on your specific case. Note that there is an answer to a similar question (using the same method but for the truncated normal distribution) here. You have already pointed out the technique of inverse-transform sampling, which involves generating a random quantile from the uniform distribution $U \sim \text{U}(0,1)$. When sampling within a truncated interval, you need merely adjust this procedure so that you generate a random quantile over the range of allowable quantiles for the truncated interval, giving a restricted random quantile $R \sim \text{U}(q_\min,q_\max)$. Now, if $X \sim \text{Exp}(\lambda)$ then the relevant values are obtained by substituting the boundaries of the interval into the survival function $\bar{F}(t) = 1 - F(t) = \exp(-\lambda t)$ (working with upper-tail probabilities here leads directly to the formula used by the software), giving:$^\dagger$ $$q_\min = \bar{F}(t_\min) = \exp(-\lambda t_\min) \quad \quad \quad q_\max = \bar{F}(t_\max) = \exp(-\lambda t_\max).$$ Since $R \sim \text{U}(q_\min,q_\max)$ we can obtain this value from the random variable $U \sim \text{U}(0,1)$ using the transformation: $$\begin{align} r &= q_\min + u (q_\max - q_\min) \\[6pt] &= \exp(-\lambda t_\min) + u (\exp(-\lambda t_\max) - \exp(-\lambda t_\min)). \\[6pt] \end{align}$$ Thus, inverse-transformation sampling gives the formula used by the software: $$\begin{align} x &= -\frac{1}{\lambda} \ln (r) \\[6pt] &= -\frac{1}{\lambda} \ln \bigg( \exp(-\lambda t_\min) + u (\exp(-\lambda t_\max) - \exp(-\lambda t_\min)) \bigg). \\[6pt] \end{align}$$ $^\dagger$ Here I am making use of the fact that the distribution is continuous to gloss over a slight complication; see whuber's answer for more detail on the general case.
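This formula translates directly into R; a short sketch (the function name rtrunc_exp and the parameter values are my own, for illustration), with a check that every draw lands inside the truncation interval:

```r
# Inverse-transform sampler for Exp(lambda) truncated to [t_min, t_max]
rtrunc_exp <- function(n, lambda, t_min, t_max) {
  u <- runif(n)
  r <- exp(-lambda * t_min) + u * (exp(-lambda * t_max) - exp(-lambda * t_min))
  -log(r) / lambda
}

set.seed(1)
x <- rtrunc_exp(1e5, lambda = 0.5, t_min = 2, t_max = 7)
range(x)  # every draw lies within [2, 7]
```

As a further check, the empirical distribution of the draws can be compared against the truncated CDF $(F(x)-F(t_\min))/(F(t_\max)-F(t_\min))$.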
34,325
Misconception about left censoring
This is a statement about subtraction and absolute value of real numbers. In the context we must understand a "time interval" as being an interval of the form $[A,B]$ where one or both of the endpoints is a random time. The object of study is the duration $X=B-A.$ When the start of the interval $A$ is not known, but only the fact that $A\le a$ is known (with $a$ marking the beginning of the study period), the variable $A$ is left censored. However, the value of the duration $X$ when the end of the interval $B=b$ is observed is $$X = b-A \ge b-a,$$ which is right censoring.
34,326
Misconception about left censoring
This can be very confusing. Start with a simple situation, when time = 0 is both the time of study entry and the time of starting some therapy, for example in a cancer outcome study in which you want to evaluate time to death after therapy starts. If Participant A both enters the study and starts therapy at time = 0, and is still alive at time = 2 years, then you know that the time between starting therapy and death will be at least 2 years for Participant A. That's standard right-censoring at 2 years. Now say that Participant B enters your study after having started treatment at some prior unknown time. You don't know exactly when treatment began, so you call time = 0 for Participant B the time of study entry. If Participant B dies at time = 2 years, you know that the time between starting therapy and death was at least 2 years. That's logically the same as for Participant A: in both cases, you have a lower limit on the time between starting therapy and death. That's right censoring on the survival time of interest in both cases. Potential confusion with Participant B can come from losing the more typical association of lack of an event with right censoring, which you do have with Participant A. With Participant B you observe an event but you don't know the actual elapsed time from starting therapy to death, so you have to treat that event time (starting from your time = 0 at study entry) as a right-censored observation.
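In R's survival package this translates into how the observations are coded; a sketch (the times and the two-row setup are invented for illustration) where both participants contribute right-censored times of 2 years on the therapy-to-death scale:

```r
library(survival)  # recommended package, ships with standard R installations

# Participant A: therapy began at study entry, still alive at 2 years
#   -> event not observed, ordinary right censoring (status = 0)
# Participant B: therapy began at an unknown earlier time, died 2 years
#   after study entry -> therapy-to-death time is at least 2 years, so
#   the observed death is also coded as right-censored (status = 0)
time   <- c(A = 2, B = 2)
status <- c(A = 0, B = 0)
Surv(time, status)  # both observations print with a "+", marking censoring
```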
34,327
Why can $R^2$ be negative in linear regression -- interview question [duplicate]
The interviewer is right. Sorry.

set.seed(2020)
x <- seq(0, 1, 0.001)
err <- rnorm(length(x))
y <- 99 - 30*x + err
L <- lm(y ~ 0 + x)  # "0" forces the intercept to be zero
plot(x, y, ylim=c(0, max(y)))
abline(a=0, b=summary(L)$coef[1], col='red')
abline(h=mean(y), col='black')
SSRes <- sum(resid(L)^2)
SSTot <- sum((y - mean(y))^2)
R2 <- 1 - SSRes/SSTot
R2

I get $R^2 = -31.22529$. This makes sense when you look at the plot the code produces. The red line is the regression line. The black line is the "naive" line where you always guess the mean of $y$, regardless of the $x$. The $R^2<0$ makes sense when you consider what $R^2$ does. $R^2$ measures how much better the regression model is at guessing the conditional mean than always guessing the pooled mean. Looking at the graph, you're better off guessing the mean of the pooled values of $y$ than you are using the regression line. EDIT There is an argument to be made that the "SSTot" to which you should compare an intercept-free model is just the sum of squares of $y$ (so $\sum (y_i-0)^2$), not $\sum (y_i - \bar{y})^2$. However, $R^2_{ish} = 1- \frac{\sum(y_i - \hat{y}_i)^2}{\sum y_i^2}$ is quite different from the usual $R^2$ and (I think) loses the usual connection to amount of variance explained. If this $R^2_{ish}$ is used, however, when the intercept is excluded, $R^2_{ish} \ge 0$.
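The alternative statistic $R^2_{ish}$ from the EDIT can be computed on the same kind of simulated data; a short sketch (the setup is repeated so it runs standalone) illustrating that it stays nonnegative for a through-the-origin OLS fit:

```r
set.seed(2020)
x <- seq(0, 1, 0.001)
y <- 99 - 30 * x + rnorm(length(x))
L <- lm(y ~ 0 + x)   # no intercept

# Compare against always guessing 0 rather than mean(y). For OLS through
# the origin, sum(y^2) = sum(resid^2) + sum(fitted^2), so this lies in [0, 1].
R2_ish <- 1 - sum(resid(L)^2) / sum(y^2)
R2_ish
```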
34,328
Why can $R^2$ be negative in linear regression -- interview question [duplicate]
It looks like your interviewer was correct. In the case that you include an intercept it is not possible. The easiest way to see this is to take the projection view of linear regression. $\hat{y} = X\hat{\beta} = X(X^TX)^{-1}X^TY = P_XY$ Where $P_X$ is an orthogonal projection matrix. It projects vectors into the subspace spanned by linear combinations of the columns of $X$. You can think of this as shining a light on the vector onto the linear subspace spanned by $X$. It maps $Y$ to the closest possible point of the subspace. We can also define the projection onto the subspace spanned by an intercept, denoted $P_\iota$, where $\iota$ is a vector of ones. It turns out that $P_\iota Y = \bar{y}$, an $n \times 1$ vector with the mean as each value. In other words, the best possible linear approximation to $Y$ using only combinations of constants would be the mean. That makes sense, and you may have seen related results in a stats class before. If $X$ includes an intercept then the linear subspace spanned by $X$ is a superset of the linear subspace spanned by an intercept. What this means is that since $P_X$ finds the closest approximation in a subspace that contains the intercept subspace, it has to be at least as close to $Y$ as the best approximation in the span of $\iota$. In other words $|Y - \hat{y}| = |Y - P_XY| \leq |Y - P_\iota Y| = |Y - \bar{y}|$ if $X$ contains the intercept (and thus the squares must also follow this inequality). Now if we do not include an intercept, this is no longer true, because the linear span of $X$ is no longer a superset of the intercept linear space. It is thus no longer guaranteed that our prediction is at least as good as the mean. Consider the example where $X$ is a single variable with mean 0 and finite variance that is independent of $Y$, and $Y$ has some arbitrary mean $E[Y] \neq 0$ (but it exists).
$\hat{\beta} = (X^TX)^{-1}X^TY \overset{p}{\to} \frac{ E[XY] }{ E[X^2] } = \frac{E[X]E[Y]}{E[X^2]} = 0$ As $n$ gets large, the coefficient becomes arbitrarily close to zero. This means that $\hat{y} \overset{p}{\to} 0$. Using the centered $\mathcal{R}^2$ formula we get \begin{align} 1 - \frac{\sum_{i=1}^n (y_i - \hat{y})^2}{\sum_{i=1}^n(y_i -\bar{y})^2} &= 1 - \frac{\sum_{i=1}^n (y_i - o_p(1))^2}{\sum_{i=1}^n(y_i -\bar{y})^2}\\ &\overset{p}{\to} 1 - \frac{E[Y^2]}{var(Y)}\\ & = 1 - \frac{E[Y^2]}{E[Y^2] - (E[Y])^2} \leq 0 \end{align} So if $X$ doesn't really explain anything in $Y$, and the mean of $Y$ is far from 0, we can have a really negative $\mathcal{R}^2$. Below is some R code to simulate such a case:

set.seed(2020)
n <- 10000
y <- rnorm(n, 50, 1)
x <- rnorm(n)
mod <- lm(y ~ -1 + x)
yhat <- predict(mod)
R2 <- 1 - sum((y - yhat)^2)/sum((y - mean(y))^2)
R2

$\mathcal{R^2} = -2514.479$ Edit: I agree with Dave that when we don't include an intercept it would be reasonable to argue that the uncentered $\mathcal{R}^2$ is the more natural $\mathcal{R}^2$ measure. The problem with the uncentered version is that it is not invariant to changes in the mean of the regressand (see Davidson and MacKinnon, Econometric Theory and Methods, chapter 3 for discussion).
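Both projection facts used above — $P_\iota Y = \bar{y}$ in every coordinate, and the shorter residual when $X$ contains an intercept — can be verified numerically; a small sketch (the design matrices and the helper proj() are mine, for illustration):

```r
set.seed(1)
n <- 50
x <- rnorm(n)
Y <- 3 + 2 * x + rnorm(n)

iota <- matrix(1, n, 1)   # intercept-only design
X    <- cbind(iota, x)    # design that includes an intercept

# Orthogonal projection onto the column space of A
proj <- function(A) A %*% solve(t(A) %*% A) %*% t(A)

# P_iota Y reproduces mean(Y) in every coordinate
max(abs(proj(iota) %*% Y - mean(Y)))  # numerically zero

# With an intercept in X, the residual sum of squares cannot exceed SSTot
sum((Y - proj(X) %*% Y)^2) <= sum((Y - mean(Y))^2)  # TRUE
```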
34,329
Why can $R^2$ be negative in linear regression -- interview question [duplicate]
Using OLS with an intercept, the only situation with negative R-squared is the following: You fit your model on a training set. You apply the model on a fresh test set, calculate the out-of-sample residuals and, from there, derive the out-of-sample R-squared. The latter can be negative. Here is a dummy example in R:

n <- 100
df <- data.frame(x=rnorm(n), y=rnorm(n))
train <- df[1:70, ]
test <- df[71:n, ]

# Train on train
fit <- lm(y ~ x, train)
summary(fit)  # Multiple R-squared: 3.832e-06

# Evaluate on test
oos_residuals <- test[, "y"] - predict(fit, test)
oos_residual_ss <- sum(oos_residuals^2)
oos_total_ss <- sum((test[, "y"] - mean(train[, "y"]))^2)
1 - oos_residual_ss / oos_total_ss  # -0.001413857
34,330
why regularization is slower slope and not higher?
1.a Related to the variance/bias trade-off. Bias / variance tradeoff math You could see regularization as a form of shrinking the parameters. When you are fitting a model to data, you need to consider that your data (and your resulting estimates) are made/generated from two components: $$ \text{data $=$ deterministic part $+$ noise }$$ Your estimates are not only fitting the deterministic part (which is the part that we wish to capture with the parameters) but also the noise. The fitting to the noise (which is overfitting, because we should not capture the noise with our estimate of the model, as this cannot be generalized and has no external validity) is something that we wish to reduce. By using regularization, by shrinking the parameters, we reduce the sample variance of the estimates, and this will reduce the tendency to fit the random noise. So that is a good thing. At the same time the shrinking will also introduce bias, but we can find some optimal amount based on computations with prior knowledge or based on data and cross validation. In the graph below, from my answer to the previously mentioned question, you can see how it works for a single-parameter model (estimate of the mean only), but it will work similarly for a linear model. 1.b On average, shrinking the coefficients, when done in the right amount, will lead to a net smaller error. Intuition: sometimes your estimate is too high (in which case shrinking improves it), sometimes your estimate is too low (in which case shrinking makes it worse). Note that shrinking does not equally influence those two kinds of error... we are not shifting the biased parameter estimate by some fixed distance independent of the value of the unbiased estimate (in which case there would indeed be no net improvement from the bias). We are shrinking by a factor, so the shift is larger when the estimate is further away from zero.
The result is that the improvement when we overestimate the parameter is larger than the deterioration when we underestimate it. So we are able to make the improvements larger than the deteriorations, and the net profit/loss will be positive. In formulas: The distribution of some unbiased parameter estimate might be some normal distribution, say: $$\hat\beta\sim\mathcal{N}(\beta, \epsilon_{\hat\beta}^2)$$ and for a shrunken (biased) parameter estimate it is $$c\hat\beta \sim \mathcal{N}(c\beta, c^2\epsilon_{\hat\beta}^2)$$ These are the curves in the left image. The black one is for the unbiased case, where $c=1$. The mean total error of the parameter estimate, a sum of squared bias and variance, is then $$E[(c\hat\beta-\beta)^2]=\underbrace{(\beta-c\beta)^2 }_{\text{squared bias of $c\hat\beta$}}+\underbrace{ c^2 \epsilon_{\hat\beta}^2}_{\text{variance of $c\hat\beta$}}$$ with derivative $$\frac{\partial}{\partial c} E[(c\hat\beta-\beta)^2]=-2\beta(\beta-c\beta)+2 c\epsilon_{\hat\beta}^2$$ which is positive at $c=1$. This means that $c=1$ is not an optimum and that reducing $c$ below $1$ leads to a smaller total error: the variance term decreases more than the bias term increases (in fact, at $c=1$ the derivative of the bias term is zero). 2. Related to prior knowledge and a Bayesian estimate You can see the regularization as the prior knowledge that the coefficients must not be too large. (And there must be some questions around here where it is demonstrated that regularization is equal to a particular prior.) This prior is especially useful in a setting where you are fitting with a large number of regressors, for which you can reasonably expect that many are redundant, and for which you can expect that most coefficients should be equal to zero or close to zero. (So this fitting with a lot of redundant parameters goes a bit further than your two-parameter model.
For the two parameters the regularization doesn't, at first sight, seem so, useful and in that case the profit by applying a prior that places the parameters closer to zero is only a small advantage) If you are applying the right prior information then your predictions will be better. This you can see in this question Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals In my answer to that question I write: The credible interval makes an improvement by including information about the marginal distribution of $\theta$ and in this way it will be able to make smaller intervals without giving up on the average coverage which is still $\alpha \%$. (But it becomes less reliable/fails when the additional assumption, about the prior, is not true) In the example the credible interval is smaller by a factor $c = \frac{\tau^2}{\tau^2+1}$ and the improvement of the coverage, albeit the smaller intervals, is achieved by shifting the intervals a bit towards $\theta = 0$, which has a larger probability of occurring (which is where the prior density concentrates). By applying a prior, you will be able to make better estimates (the credible interval is smaller than the confidence interval, which does not use the prior information). But.... it requires that the prior/bias is correct or otherwise the biased predictions with the credible interval will be more often wrong. Luckily, it is not unreasonable to expect a priori that the coefficients will have some finite maximum boundary, and shrinking them to zero is not a bad idea (shrinking them to something else than zero might be even better and requires appropriate transformation of your data, e.g. centering beforehand). 
How much you shrink can be found out with cross validation or objective Bayesian estimation (to be honest I do not know so much about objective Bayesian methods, could somebody maybe confirm that regularization is actually in some sort of sense comparable to objective Bayesian estimation?).
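The claim in 1.b can be checked numerically. The sketch below is a Monte-Carlo illustration with assumed values $\beta=2$ and $\epsilon=1$ (both arbitrary): it shrinks an unbiased normal estimate by the MSE-optimal factor $c=\beta^2/(\beta^2+\epsilon^2)$ and compares mean squared errors.

```python
import random
import statistics

random.seed(0)

beta, eps = 2.0, 1.0                      # assumed true parameter and estimator s.d.
c_opt = beta**2 / (beta**2 + eps**2)      # MSE-optimal shrinkage factor (= 0.8 here)

def mse(c, n=100_000):
    """Monte-Carlo estimate of E[(c*beta_hat - beta)^2]."""
    errs = []
    for _ in range(n):
        beta_hat = random.gauss(beta, eps)       # unbiased estimate of beta
        errs.append((c * beta_hat - beta) ** 2)
    return statistics.fmean(errs)

print(mse(1.0))    # ~ eps^2 = 1.0: no bias, all error is variance
print(mse(c_opt))  # ~ 0.8: squared bias 0.16 + variance 0.64, a net improvement
```

The shrunken estimate trades a little bias (0.16) for a larger drop in variance (from 1.0 to 0.64), exactly as the derivative argument predicts.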
34,331
Why does regularization give a lower slope and not a higher one?
Consider a large collection of regression problems like this one, with different 'true best' slopes and different estimated slopes. You're correct that in any single data set, the estimated slope is equally likely to be above or below the truth. But if you look at the whole collection of problems, the estimated slopes will vary more than the true slopes (because of the added estimation uncertainty), so that the largest estimated slopes will tend to have been overestimated and the smallest estimated slopes will tend to have been underestimated. Shrinking all the slopes towards zero will make some of them more accurate and some of them less accurate, but you can see how it would make them collectively more accurate in some sense. You can make this argument precise in a Bayesian sense where the shrinkage comes from a prior distribution over slopes or just from the idea that the problems are exchangeable in some sense. You can also make it precise in a frequentist sense: it's Stein's Paradox, which Wikipedia covers well: https://en.wikipedia.org/wiki/Stein%27s_example
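The "collection of problems" argument can be sketched in pure Python (numbers assumed): draw many true slopes from $\mathcal N(0,\tau^2)$, add estimation noise with s.d. $\sigma$, and compare the total squared error of the raw estimates against estimates shrunk by the factor $\tau^2/(\tau^2+\sigma^2)$.

```python
import random

random.seed(1)

tau, sigma = 1.0, 1.0                    # assumed spread of true slopes / estimation noise
shrink = tau**2 / (tau**2 + sigma**2)    # shrinkage factor suggested by the Bayesian view

raw_err = shrunk_err = 0.0
for _ in range(50_000):                  # one regression problem per iteration
    true_slope = random.gauss(0.0, tau)
    estimate = random.gauss(true_slope, sigma)   # unbiased but noisy slope estimate
    raw_err    += (estimate - true_slope) ** 2
    shrunk_err += (shrink * estimate - true_slope) ** 2

print(raw_err / 50_000, shrunk_err / 50_000)     # shrinking wins across the collection
```

On any single problem shrinking may help or hurt, but summed over the whole collection the shrunken estimates roughly halve the error here, which is the Stein-type effect described above.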
34,332
Why does regularization give a lower slope and not a higher one?
This seems a really interesting discussion, and it is maybe nice to point out another feature of regularization.

Why does regularization reduce the risk of overfitting? At first glance it could sound strange to talk about overfitting for such a simple model (simple linear regression). However, I think the point the example wants to emphasize is the impact of the regularization on the leverage.

Suppose we have a ridge regression (what follows can be generalized to more exotic problems) $$ \hat{y} = X \hat{\beta} = X (X'X + k I)^{-1} X' y = H y $$ where $H$ is the hat matrix, $X$ is the model matrix ($n \times p$), $I$ is the identity matrix, and $k \ge 0$ is the regularization constant shrinking the values of $\beta$. The leverage is equal to the diagonal elements of the matrix $H$ (let's indicate them as $h_{ii}$). This is true for the simple regression model as well as for the regularized one (and for any regularized estimator, for that matter). But what exactly is the impact of the regularization on the leverage?

If we compute the SVD of $X = UDV'$, it can be shown that the ridge leverage is equal to $$ h_{ii} = \sum_{j = 1}^{p} \frac{\lambda_{j}}{\lambda_{j} + k} u^{2}_{ij} $$ with $\lambda_{j}$ equal to the $j$th eigenvalue of $X'X$, $u_{ij}\lambda^{1/2}_{j}$ the projection of the $i$th row of $X$ onto the $j$th principal axis, and $\mbox{tr}(H) = \sum h_{ii}$ measuring the effective degrees of freedom. From the formula above we can deduce that, for $k > 0$:

1. For each observation, the ridge regression leverage is smaller than the LS leverage.
2. The leverage decreases monotonically as $k$ increases.
3. The rate of decrease of the leverage depends on the position of the single $X$-row (rows in the direction of the principal axes with larger eigenvalues experience a smaller leverage-reduction effect).

Going back to the example: in my opinion, the author just wants to stress the fact that the regularized line is not pulled down by the blue point around 20K as much as the non-regularized one when the red dots in the same surroundings are taken out (this in light of points 1 & 3 above). This prevents 'overfitting' (which we can read here as high influence) and ensures better results also for unseen data. I hope my answer adds something interesting to this nice discussion.
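The three deductions above can be verified numerically on toy data (pure Python, values assumed): compute the diagonal of the ridge hat matrix $H = X(X'X+kI)^{-1}X'$ directly for a two-column design, using an explicit $2\times 2$ inverse, and watch every leverage shrink as $k$ grows.

```python
def ridge_leverages(X, k):
    """Diagonal h_ii of H = X (X'X + kI)^{-1} X' for a two-column design matrix."""
    a = sum(x[0] * x[0] for x in X) + k        # entries of X'X + kI
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + k
    det = a * d - b * b                        # explicit 2x2 inverse
    inv = ((d / det, -b / det), (-b / det, a / det))
    return [xi[0] * (inv[0][0] * xi[0] + inv[0][1] * xi[1])
            + xi[1] * (inv[1][0] * xi[0] + inv[1][1] * xi[1]) for xi in X]

X = [(1.0, 0.5), (1.0, 1.5), (1.0, 2.5), (1.0, 9.0)]   # last row: a high-leverage point
for k in (0.0, 1.0, 10.0):
    print(k, [round(h, 3) for h in ridge_leverages(X, k)])
```

At $k=0$ the leverages sum to $p=2$ and the outlying row dominates; as $k$ grows every $h_{ii}$ decreases, which is exactly how ridge limits the influence of such points.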
34,333
Why does regularization give a lower slope and not a higher one?
It's an awkward example to demo regularization. The problem is that nobody regularizes with two variables and 36 data points. It's just one terrible example which makes me cringe. If anything the issue is underfitting: there aren't enough variables (or degrees of freedom) in this model. For instance, no matter what the GDP per capita is, if your country has a GULAG in it, it's going to impact your life satisfaction, trust me on this one. Nothing can save this model.

So, you are right to call the author out on this example. It doesn't make sense. I'm surprised my colleagues are trying to somehow rationalize this as a suitable didactic tool to teach regularization.

He has an appropriate overfitting example in the book. Here's the figure:

Now, if you'd apply regularization and a high-degree polynomial, then it would be a great way to show how regularization can potentially improve the performance of a model, and the limitations of regularization. Here's my replication of the result: I applied an order-15 polynomial regression of the kind that Excel does, except my $x^k$ were standardized before plugging into the regression. It's the crazy dotted line, similar to the one in the book. Also, you can see the straight-line regression, which seems to miss that "life satisfaction" (why would anyone pick this as an example?!) saturates. I suppose we should stop trying to satisfy Western consumers at this time, not worth it.

Next, I applied Tikhonov regularization (similar to ridge regression) and show it as the green solid line. It seems quite a bit better than the straight polynomial. However, I had to run a few different regularization constants to get a fit this good.

Second, and most important point: it doesn't fix the model issue. If you plug in a high enough GDP it blows up. So, regularization is not a magic cure. It can reduce overfitting in an interpolation context, but it may not fix the issues in an extrapolation context. That's one reason, in my opinion, why our AI/ML solutions based on deep learning and NN are so data hungry: they are not very good at extrapolating (out of sample is not extrapolation, btw). They don't create new knowledge, they only memorize what we knew before. They want every corner covered in the input data set, otherwise they tend to produce ridiculous outputs, unexplainable too. So, this example would have been close to what the ML/AI field does in spirit. A univariate linear regression, like in the example you show, is exactly the opposite in spirit and letter to what the ML/AI field uses. A parsimonious, explainable, trackable model? No way!

A little feature engineering goes a long way

Here, instead of using the polynomial regression, I plugged in what's called the Nelson-Siegel-Svensson model from finance. It's actually based on Gauss-Laguerre orthogonal functions. The straight fit (dotted line) produces a very good interpolation. However, its value at very low GDPs doesn't make much sense. So I applied a Tikhonov regularization (green line), and it seems to produce a more reasonable fit at both very low and high GDP, at the expense of a poorer fit inside the observed GDP range.
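A minimal sketch of the polynomial-plus-Tikhonov idea, in pure Python with toy data and a hypothetical degree/penalty choice (not the book's actual numbers): solve the regularized normal equations $(X'X + kI)\beta = X'y$ for a polynomial design and compare coefficient sizes with and without the penalty.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def poly_ridge(xs, ys, degree, k):
    """Tikhonov-regularized polynomial fit: solve (X'X + kI) beta = X'y."""
    X = [[x ** j for j in range(degree + 1)] for x in xs]
    p = degree + 1
    XtX = [[sum(r[i] * r[j] for r in X) + (k if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(p)]
    return solve(XtX, Xty)

xs = [i / 9 for i in range(10)]
ys = [x + 0.3 * (-1) ** i for i, x in enumerate(xs)]   # noisy straight line
wild = poly_ridge(xs, ys, 5, 0.0)   # unpenalized high-degree fit: large coefficients
tame = poly_ridge(xs, ys, 5, 1.0)   # Tikhonov fit: much smaller coefficients
print(sum(c * c for c in wild), sum(c * c for c in tame))
```

The penalized coefficient vector is far smaller in norm, which is why the regularized curve wiggles less inside the data range; as noted above, though, nothing here guarantees sensible extrapolation outside it.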
34,334
Why does regularization give a lower slope and not a higher one?
I'm going to ignore all rigor and just give an answer that (hopefully) appeals to intuition. Let's consider least squares. Then our goal is to find $\operatorname{argmin}\{ RSS + \lambda J \}$, where $J$ is the complexity penalty and $\lambda$ is a tunable hyperparameter. You can think of $J$ as being L1 or L2 regularization, say $J := \|\beta\|^2$. So, ignoring all equations, let's just think about this problem. Since our goal is to minimize this sum, it will be small when both $RSS$ and $\lambda J$ are small. Well, since $J$ is by definition the norm of the weight vector, it will be small when the weights are small. Since the weights determine the slope, it follows that regularization will give us a lower slope.
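The one-regressor case makes this concrete. Minimizing $\sum_i (y_i - \beta x_i)^2 + \lambda \beta^2$ has the closed form $\hat\beta_\lambda = \sum x_i y_i / (\sum x_i^2 + \lambda)$, so the penalty can only pull the slope toward zero. A quick sketch with made-up numbers:

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]          # roughly y = 2x

def ridge_slope(lam):
    """Closed-form minimizer of sum (y - b*x)^2 + lam * b^2 (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

for lam in (0.0, 5.0, 50.0):
    print(lam, round(ridge_slope(lam), 3))   # slope shrinks monotonically in lam
```

With $\lambda=0$ the slope is the ordinary least-squares value (about 2 here); increasing $\lambda$ only enlarges the denominator, so the fitted slope can never get steeper.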
34,335
Is the Wilcoxon rank-sum test a nonparametric alternative to the two sample t-test? Null hypotheses are different
Is the Wilcoxon rank-sum test a nonparametric alternative to the two sample t-test? Yes and no. (Go not to the elves for counsel...)

Speaking broadly, any given test statistic has some power curve in relation to a given sequence of alternatives under some set of assumptions (sufficiently specified to have a unique value for power under any element in the sequence); a test with reasonable power against some set of alternatives will also tend to have power against other alternatives that are in some sense similar (e.g. if I make one set of values typically larger than another, I will also typically make the difference in means and 75th percentiles larger while I am doing it). The questions we should tend to focus on are along the lines of "what alternatives do I want to test against, what else am I prepared to assume, and what are the properties of various possible test statistics in those cases?" Unfortunately many people have a tendency to adapt their hypothesis to a chosen test statistic rather than the other way around.

You're right that the most general forms of the hypotheses are different. However, if you add some additional assumptions to the rank sum test, you could regard it that way; for example, if you assume that the alternative is a location shift (that the distribution is the same even under the alternative, apart from being shifted up or down), then you could see it as a kind of nonparametric equivalent. In effect, if the only issue with the ordinary two-sample t-test is that the populations are not necessarily normally distributed, but otherwise everything else is as supposed, then you might treat the Wilcoxon rank sum test as an alternative version of a t-test that doesn't assume normality. For example, the population location shift will correspond to a difference in population means if the population distribution has a finite mean. It's sensitive to that kind of alternative, so it will be a good test for that situation (even if the population distributions are exactly normal). However it's also sensitive to other alternatives (i.e. if your location-shift-alternative assumption is wrong, you may be rejecting because of something other than a location shift). On the other hand, the t-test itself is also sensitive to differences other than a pure shift in the mean, so one might say much the same thing about it; if you assume a pure location shift but in actual fact it's (say) a scale shift, it will reject and you might tend to misinterpret the outcome. It's always important to think carefully about what sort of alternatives you want to test for, exactly, and what you're prepared to assume about the populations under those alternatives.

There's a much more straightforward nonparametric equivalent, though, which is a permutation test based on a t-statistic*. If the assumptions of the t-test hold, this test will typically work very similarly - especially in large samples. If the assumptions of the t-test don't hold, but the assumptions of the permutation test do, then it will have the advertised significance level (though it may be less powerful than the rank sum test on shift alternatives if the populations are heavy-tailed).

*(if you prefer, you could do a permutation test based on a difference in sample means)
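A minimal sketch of that permutation test, in pure Python with made-up data: permute the group labels, recompute a Welch-type t statistic each time, and count how often the permuted statistic is at least as extreme as the observed one.

```python
import random
from statistics import mean, stdev

def t_stat(a, b):
    """Welch-type two-sample t statistic (unequal variances allowed)."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

def perm_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation p-value for t_stat under exchangeability of groups."""
    rng = random.Random(seed)
    observed = abs(t_stat(a, b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(t_stat(pooled[:len(a)], pooled[len(a):])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)       # add-one correction keeps p > 0

x = [4.1, 5.0, 6.2, 5.5, 4.8, 5.9]
y = [7.3, 8.1, 6.9, 7.7, 8.4, 7.0]         # clearly shifted upward
print(perm_test(x, y))                     # small p: a shift is detected
```

No normality is assumed anywhere; the null distribution of the statistic comes entirely from relabelling the pooled data, which is valid whenever the groups are exchangeable under the null.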
Is the Wilcoxon rank-sum test a nonparametric alternative to the two sample t-test? Null hypotheses
Is the Wilcoxon rank-sum test a nonparametric alternative to the two sample t-test? Yes and no. (Go not to the elves for counsel...) Speaking broadly, any given test statistic has some power curve
Is the Wilcoxon rank-sum test a nonparametric alternative to the two sample t-test? Null hypotheses are different Is the Wilcoxon rank-sum test a nonparametric alternative to the two sample t-test? Yes and no. (Go not to the elves for counsel...) Speaking broadly, any given test statistic has some power curve in relation to a given sequence of alternatives under some set of assumptions (sufficiently specified to have a unqiue value for power under any element in the sequence); a test with reasonable power against some set of alternatives will also tend to have power against other alternatives that are in some sense similar (e.g. if I make one set of values typically larger than another, I will also typically make the difference in means and 75th percentiles larger while I am doing it). The questions we should tend to focus on are along the lines of "what alternatives do I want to test against, what else am I prepared to assume, and what are the properties of various possible test statistics in those cases?" Unfortunately many people have a tendency to adapt their hypothesis to a chosen test statistic rather than the other way around. You're right that the most general form of the hypotheses are different. However, if you add some additional assumptions to the rank sum test, you could regard it that way; for example, if you assume that the alternative is a location shift (that the distribution is the same even under the alternative, apart from being shifted up or down), then you could see it as a kind of nonparametric equivalent. In effect, if the only issue with the ordinary two-sample t-test is the populations are not necessarily normally distributed, but otherwise everything else is as supposed, then you might treat the Wilcoxon rank sum test as an alternative version of a t-test that doesn't assume normality. For example, the population location shift will correspond to a difference in population means if the population distribution has a finite mean. 
It's sensitive to that kind of alternative, so it will be a good test for that situation (even if the population distributions are exactly normal). However it's also sensitive to other alternatives (i.e. if your location-shift-alternative assumption is wrong, you may be rejecting because of something other than a location shift). On the other hand, the t-test itself is also sensitive to other differences than a pure shift in the mean, so one might say the much the same thing about it; if you assume a pure location shift but in actual fact it's (say) a scale shift you, it will reject and you might tend to misinterpret the outcome. It's always important to think carefully about what sort of alternatives you want to test for, exactly, and what you're prepared to assume about the populations under those alternatives. There's a much more straightforward nonparametric equivalent, though, which is a permutation test based on a t-statistic*. If the assumptions of the t-test hold, this test will typically work very similarly - especially in large samples. If the assumptions of the t-test don't hold, but the assumptions of the permutation test do, then it will have the advertized significance level (though it may be less powerful than the rank sum test on shift alternatives if the populations are heavy-tailed). *(if you prefer, you could do a permutation test based on a difference in sample means).
Is the Wilcoxon rank-sum test a nonparametric alternative to the two sample t-test? Null hypotheses are different
Unfortunately, we often teach students about hypothesis testing this way: "To compare the central tendency of two groups, use a t test, unless you can't for some reason, then use a Wilcoxon-Mann-Whitney (WMW) test." In this sense, it's an "alternative" to the t test, comparing the location of two groups, when a t test itself isn't appropriate. Instead, I wish we would emphasize that different tests evaluate different hypotheses. Thinking of the two-sample case, there are times when we want to compare means (t test), or medians (Mood's median test or others), or the 75th percentile (quantile regression), or stochastic dominance (WMW, essentially). Any of these may be of interest, depending on the data we're evaluating, and what we want to know.
Is the Wilcoxon rank-sum test a nonparametric alternative to the two sample t-test? Null hypotheses are different
Most textbooks teach that the Wilcoxon rank sum test is for testing for equality of central tendency of two distributions, or for a location shift, and that the choice between the Wilcoxon rank sum test and the t-test should depend on the results of a test of normality. In reality, it is only in rare cases that the population distributions of the two groups are merely shifted, and the Wilcoxon rank sum test actually tests $P(X > Y) + \frac{1}{2}P(X = Y)$. A global inconsistency for comparisons among several groups may happen with the Wilcoxon rank sum test, as shown in the paper George W.D. (which can reach the conclusion $A < B < C < A$). The Wilcoxon rank sum test measures an attribute of two sets of observations that is a function of how they are distributed relative to each other, and not of any absolute features of either distribution alone. Anyway, it seems that 1) the choice between the t- and Wilcoxon tests should not be based on a test of normality, and 2) the Wilcoxon rank sum test is NOT a test for equality of central tendency of two distributions in most cases.
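To make that concrete (a sketch of my own, not part of the answer): the Mann-Whitney $U$ statistic divided by the number of $(X, Y)$ pairs is exactly the plug-in estimate of $P(X>Y) + \frac{1}{2}P(X=Y)$:

```python
def u_statistic(x, y):
    """Mann-Whitney U: count pairs with x_i > y_j, counting ties as 1/2."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

x = [1, 3, 3, 7, 9]
y = [2, 3, 4, 5]
p_hat = u_statistic(x, y) / (len(x) * len(y))
print(p_hat)  # 0.55 -- the sample estimate of P(X > Y) + 0.5 * P(X = Y)
```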
Proof that if covariance is zero then there is no linear relationship
Here is a proof of the mathematical statement at the end of your question: we can find a $Z$ which is uncorrelated to $X$ and satisfies $$ \mathbb{E}(Y|X) = b X + \mathbb{E}(Z|X) $$ by assuming $Z = Y - bX$, and then choosing the $b$ which makes $\mathrm{Cov}(X, Z) = 0$ true. For this $b$ we have $$ 0 = \mathrm{Cov}(X, Z) = \mathrm{Cov}(X, Y - bX) = \mathrm{Cov}(X, Y) - b \mathrm{Var}(X), $$ and thus $$ b = \frac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(X)}. $$ (Note that the same $b$ is found as the slope of the linear regression line.) We have $b = 0$, if and only if $\mathrm{Cov}(X,Y) = 0$.
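A quick numerical check of this (my own sketch; the simulated data are arbitrary): with $b = \mathrm{Cov}(X,Y)/\mathrm{Var}(X)$ computed from a sample, the residual $Z = Y - bX$ has sample covariance zero with $X$:

```python
import random

random.seed(42)
n = 100000
x = [random.gauss(0, 1) for _ in range(n)]
y = [2 * xi + random.gauss(0, 1) for xi in x]   # Y depends linearly on X, plus noise

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

b = cov(x, y) / cov(x, x)                       # the slope from the answer
z = [yi - b * xi for xi, yi in zip(x, y)]
print(b, cov(x, z))  # b is close to 2; cov(x, z) is zero up to rounding
```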
Proof that if covariance is zero then there is no linear relationship
If there is a linear relationship between two RVs, i.e. $Y=aX+b$ where $a\neq 0$, then the covariance is $$\operatorname{cov}(X,Y)=a\operatorname{cov}(X,X)=a\operatorname{var}(X)\neq0$$ (assuming $\operatorname{var}(X)\neq 0$, i.e. $X$ is not degenerate). So, if there is a linear relation, the covariance is not zero. If the covariance is zero, the linear relation can't exist, because that would give a contradiction.
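The identity $\operatorname{cov}(X, aX+b) = a\operatorname{var}(X)$ can be checked exactly on a small sample using rational arithmetic (an illustrative sketch of my own; the numbers are arbitrary):

```python
from fractions import Fraction

x = [Fraction(v) for v in (1, 2, 4, 7)]
a, b = Fraction(3), Fraction(-2)
y = [a * xi + b for xi in x]          # exact linear relationship Y = aX + b

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v)) / len(u)

assert cov(x, y) == a * cov(x, x)     # holds exactly, not just approximately
print(cov(x, y))                      # nonzero, since a != 0 and var(X) != 0
```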
Proof that if covariance is zero then there is no linear relationship
On a distribution level it should be straightforward to show that a linear relationship implies a non-zero covariance (the contrapositive of what you wanted to prove). But as a word of warning, this may not hold for a sample: if you have a small data set generated with a linear relationship, but by chance containing a large outlier, you can compute a negative correlation or no correlation on the sample.
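A made-up example of that warning: five points lying exactly on the line $y = x$, plus one extreme outlier, produce a negative sample covariance:

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.0, 2.0, 3.0, 4.0, 5.0, -100.0]  # generated as y = x, but the last point is an outlier

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v)) / len(u)

print(cov(x, y))  # about -41.25: negative despite the underlying linear relation
```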
Calculating the expected value of truncated normal
Your formula implementation is wrong because, $$\phi\left(\frac{x-\mu}{\sigma}\right)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}\neq f_{X,\mu,\sigma}(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$$ As you can see, we have an extra $\sigma$ in the denominator of $f_{X,\mu,\sigma}(x)$, which yields: $$\phi\left(\frac{x-\mu}{\sigma}\right)=\sigma f_{X,\mu,\sigma}(x)$$ R's dnorm function gives you $f_{X,\mu,\sigma}(x)$, which you need to multiply by $\sigma$ to obtain $\phi$. Since your $\sigma=2$, this can be practically done via subtracting the second term again, which is $1-0.7124=0.2876$: $$1-0.2876-0.2876=0.4248$$ which is close to your estimate.
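The identity can be verified numerically (a pure-Python stand-in for R's dnorm, using the formula for $f_{X,\mu,\sigma}$ above; the values of $\mu$, $\sigma$ and $x$ are arbitrary):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) -- the quantity R's dnorm(x, mu, sigma) returns."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (math.sqrt(2 * math.pi) * sigma)

mu, sigma, x = 1.0, 2.0, 0.5
z = (x - mu) / sigma
phi = normal_pdf(z)                   # standard normal density at z = (x - mu)/sigma
f = normal_pdf(x, mu, sigma)          # what dnorm(x, mu, sigma) would give
print(phi, sigma * f)                 # equal: phi((x - mu)/sigma) = sigma * f(x)
```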
When does $X_n\stackrel{d}{\rightarrow}X$ and $Y_n\stackrel{d}{\rightarrow}Y$ imply $X_n+Y_n\stackrel{d}{\rightarrow}X+Y$?
Formalizing @Ben's answer, independence is almost a sufficient condition, because we know that the characteristic function of the sum of two independent RV's is the product of their marginal characteristic functions. Let $Z_n = X_n + Y_n$. Under independence of $X_n$ and $Y_n$, $$\phi_{Z_n}(t) = \phi_{X_n}(t)\phi_{Y_n}(t)$$ So $$\lim \phi_{Z_n}(t) =\lim \Big [\phi_{X_n}(t)\phi_{Y_n}(t)\Big]$$ and we have (since we assume that $X_n$ and $Y_n$ converge) $$\lim \Big [\phi_{X_n}(t)\phi_{Y_n}(t)\Big] = \lim \phi_{X_n}(t)\cdot \lim \phi_{Y_n}(t) = \phi_{X}(t)\cdot \phi_{Y}(t) $$ which is the characteristic function of $X+Y$... if $X$ and $Y$ are independent. And they will be independent if one of the two has a continuous distribution function (see this post). This is the condition required, in addition to independence of the sequences, so that independence is preserved at the limit. Without independence we would have $$\phi_{Z_n}(t) \neq \phi_{X_n}(t)\phi_{Y_n}(t)$$ and no general assertion could be made about the limit.
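A concrete deterministic check (my own sketch): for independent $X_n, Y_n \sim N(0,1)$ we have $X_n + Y_n \sim N(0,2)$, and the characteristic functions $\phi(t) = e^{-\sigma^2 t^2/2}$ multiply exactly as claimed:

```python
import math

def cf_normal(t, var=1.0):
    """Characteristic function of a N(0, var) variable: exp(-var * t^2 / 2)."""
    return math.exp(-var * t * t / 2)

for t in (0.0, 0.5, 1.0, 2.0, 3.7):
    product = cf_normal(t, var=1.0) * cf_normal(t, var=1.0)  # phi_X(t) * phi_Y(t)
    direct = cf_normal(t, var=2.0)                           # phi_{X+Y}(t), X+Y ~ N(0, 2)
    assert abs(product - direct) < 1e-12
print("characteristic functions multiply under independence")
```

(For mean-zero normals the characteristic function is real, which keeps the check to ordinary floats.)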
When does $X_n\stackrel{d}{\rightarrow}X$ and $Y_n\stackrel{d}{\rightarrow}Y$ imply $X_n+Y_n\stackrel{d}{\rightarrow}X+Y$?
The Cramer-Wold theorem gives a necessary and sufficient condition: Let $\{z_n\}$ be a sequence of $R^K$-valued random variables. Then, $$ z_n \to_d z\;\Longleftrightarrow\;\lambda'z_n\to_d \lambda'z\quad\forall\quad \lambda\in R^K\backslash\{0\} $$ To give an example, let $U\sim N(0,1)$ and define $W_n:=U$ as well as $V_n:=(-1)^nU$. We then trivially have $$W_n\to_d U$$ and, due to symmetry of the standard normal distribution, that $$V_n\to_d U.$$ However, $W_n+V_n$ does not converge in distribution, as $$ W_n+V_n=\begin{cases}2U\sim N(0,4)&\text{for}\;n\;\text{even}\\ 0&\text{for}\;n\;\text{odd}\end{cases} $$ This is an application of the Cramer-Wold Device for $\lambda=(1,\;1)'$.
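The counterexample can be mimicked numerically (an illustrative sketch): draw $U$, set $W_n = U$ and $V_n = (-1)^n U$; each marginal behaves like $U$, but the sum alternates between $0$ (odd $n$) and $2U$ (even $n$):

```python
import random

random.seed(0)
u = [random.gauss(0, 1) for _ in range(100000)]

# Odd n: W_n + V_n = U + (-1)^n U = 0 for every draw.
assert all(ui + (-1) ** 3 * ui == 0.0 for ui in u)

# Even n: W_n + V_n = 2U, whose variance is 4, not 0.
s = [2 * ui for ui in u]
var = sum(x * x for x in s) / len(s)
print(var)  # close to 4, while the odd-n sum is identically 0
```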
When does $X_n\stackrel{d}{\rightarrow}X$ and $Y_n\stackrel{d}{\rightarrow}Y$ imply $X_n+Y_n\stackrel{d}{\rightarrow}X+Y$?
Yes, independence is sufficient: The antecedent conditions here concern convergence in distribution for the marginal distributions of $\{ X_n \}$ and $\{ Y_n \}$. The reason that the implication does not hold generally is that there is nothing in the antecedent conditions that deals with the statistical dependence between the elements of the two sequences. If you were to impose independence of the sequences then that would be sufficient to ensure convergence in distribution of the sum. (Alecos has added an excellent answer below that proves this result using characteristic functions. Asymptotic independence is also sufficient for this implication, since the same limiting decomposition of the characteristic functions occurs.)
Worm and Apple Expected Value
In the excellent answer by Glen_b, he shows that you can calculate the expected value analytically using a simple system of linear equations. Following this analytic method you can determine that the expected number of moves to the apple is six. Another excellent answer by whuber shows how to derive the probability mass function for the process after any given number of moves, and this method can also be used to obtain an analytic solution for the expected value. If you would like to see some further insight into this problem, you should read some papers on circular random walks (see e.g., Stephens 1963). To give an alternative view of the problem, I am going to show you how you can get the same result using the brute force method of just calculating out the Markov chain using statistical computing. This method is inferior to analytical examination in many respects, but it has the advantage that it lets you deal with the problem without requiring any major mathematical insight. Brute force computational method: Taking the states in order $A,B,C,D,E$, your Markov chain transitions according to the following transition matrix: $$\mathbf{P} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\[6pt] \tfrac{1}{2} & 0 & \tfrac{1}{2} & 0 & 0 \\[6pt] 0 & \tfrac{1}{2} & 0 & \tfrac{1}{2} & 0 \\[6pt] 0 & 0 & \tfrac{1}{2} & 0 & \tfrac{1}{2} \\[6pt] \tfrac{1}{2} & 0 & 0 & \tfrac{1}{2} & 0 \\[6pt] \end{bmatrix}$$ The first state is the absorbing state $A$ where the worm is at the apple. Let $T_C$ be the number of moves until the worm gets to the apple from state $C$.
Then for all $n \in \mathbb{N}$ the probability that the worm is at the apple after this number of moves is $\mathbb{P}(T_C \leqslant n) = \{ \mathbf{P}^n \}_{C,A}$ and so the expected number of moves to get to the apple from this state is: $$\mathbb{E}(T_C) = \sum_{n=0}^\infty \mathbb{P}(T_C > n) = \sum_{n=0}^\infty (1-\{ \mathbf{P}^n \}_{C,A}).$$ The terms in the sum decrease exponentially for large $n$ so we can compute the expected value to any desired level of accuracy by truncating the sum at a finite number of terms. (The exponential decay of the terms ensures that we can limit the size of the removed terms to be below a desired level.) In practice it is easy to take a large number of terms until the size of the remaining terms is extremely small. Programming this in R: You can program this as a function in R using the code below. This code has been vectorised to generate an array of powers of the transition matrix for a finite sequence of moves. We also generate a plot of the probability that the apple has not been reached, showing that this decreases exponentially. 
#Create function to give n-step transition matrix for n = 1,...,N
#N is the last value of n
PROB <- function(N) {
  P <- matrix(c(1,   0,   0,   0,   0,
                1/2, 0,   1/2, 0,   0,
                0,   1/2, 0,   1/2, 0,
                0,   0,   1/2, 0,   1/2,
                1/2, 0,   0,   1/2, 0),
              nrow = 5, ncol = 5, byrow = TRUE);
  PPP <- array(0, dim = c(5,5,N));
  PPP[,,1] <- P;
  for (n in 2:N) { PPP[,,n] <- PPP[,,n-1] %*% P; }
  PPP }

#Calculate probabilities of reaching apple for n = 1,...,100
N <- 100;
DF <- data.frame(Probability = PROB(N)[3,1,], Moves = 1:N);

#Plot probability of not having reached apple
library(ggplot2);
FIGURE <- ggplot(DF, aes(x = Moves, y = 1-Probability)) +
  geom_point() +
  scale_y_log10(breaks = scales::trans_breaks("log10", function(x) 10^x),
                labels = scales::trans_format("log10", scales::math_format(10^.x))) +
  ggtitle('Probability that worm has not reached apple') +
  xlab('Number of Moves') +
  ylab('Probability');
FIGURE;

#Calculate expected number of moves to get to apple
#Calculation truncates the infinite sum at N = 100
#We add one to represent the term for n = 0
EXP <- 1 + sum(1-DF$Probability);
EXP;

[1] 6

As you can see from this calculation, the expected number of moves to get to the apple is six. This calculation was extremely rapid using the above vectorised code for the Markov chain.
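The same truncated-sum calculation can be replicated in a few lines of pure Python (a cross-check of the R code above, using the same transition matrix and state order $A,B,C,D,E$):

```python
P = [
    [1,   0,   0,   0,   0  ],
    [0.5, 0,   0.5, 0,   0  ],
    [0,   0.5, 0,   0.5, 0  ],
    [0,   0,   0.5, 0,   0.5],
    [0.5, 0,   0,   0.5, 0  ],
]

def matmul(A, B):
    """Multiply two square matrices stored as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# E(T_C) = sum_{n >= 0} P(T_C > n); the n = 0 term contributes 1.
expected = 1.0
Pn = P
for n in range(1, 200):
    expected += 1 - Pn[2][0]      # P(T_C > n) = 1 - {P^n}_{C,A}
    Pn = matmul(Pn, P)
print(round(expected, 6))  # 6.0, matching the R result
```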
Worm and Apple Expected Value
Just want to illustrate a simple way to look at part (a) without going through all the Markov chain routine. There are two classes of states to worry about: being one step away and being two steps away (C and D are identical in terms of expected steps until reaching A, and B and E are identical). Let "$S_B$" represent the number of steps it takes from vertex $B$ and so on. $E(S_C) = 1+\frac12[E(S_B)+E(S_D)] = 1+ \frac12[E(S_B)+E(S_C)]$ Similarly write an equation for the expectation for $E(S_B)$. Substitute the second into the first (and for convenience write $c$ for $E(S_C)$) and you get a solution for $c$ in a couple of lines.
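Carrying out that substitution (my own sketch, writing $b = E(S_B)$ and $c = E(S_C)$, and using $E(S_D) = E(S_C)$ by symmetry): from $b = 1 + \tfrac12 c$ and $c = 1 + \tfrac12(b + c)$, elimination gives $c\,(1 - \tfrac34) = \tfrac32$, so $c = 6$. An exact check:

```python
from fractions import Fraction

# From vertex B: reach A directly (0 further steps) or move to C, so b = 1 + c/2.
# Substituting into c = 1 + (b + c)/2 gives c = 3/2 + (3/4)c, i.e. c(1 - 3/4) = 3/2.
c = Fraction(3, 2) / (1 - Fraction(3, 4))
b = 1 + c / 2
assert c == 1 + (b + c) / 2 and b == 1 + c / 2   # both original equations hold
print(c, b)  # 6 4
```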
Worm and Apple Expected Value
The problem

This Markov chain has three states, distinguished by whether the worm is $0,$ $1,$ or $2$ spaces away from $C.$ Let $X_i$ be the random variable giving how many steps the worm will take to reach $C$ from state $i\in\{0,1,2\}.$ Their probability generating functions are a convenient algebraic way to encode the probabilities of these variables. It is unnecessary to worry about analytic issues like convergence: just view them as formal power series in a symbol $t$ given by $$f_i(t) = \Pr(X_i=0) + \Pr(X_i=1)t^1 + \Pr(X_i=2)t^2 + \cdots + \Pr(X_i=n)t^n + \cdots$$ Since $\Pr(X_0=0)=1,$ it is trivial that $f_0(t)=1.$ We need to find $f_2.$

Analysis and solution

From state $1,$ the worm has equal chances of $1/2$ of moving back to state $2$ or reaching $C$. Accounting for taking this one step adds $1$ to all powers of $t$, tantamount to multiplying the pgf by $t$, giving $$f_1 = \frac{1}{2}t\left(f_2 + f_0\right).$$ Similarly, from state $2$ the worm has equal chances of staying in state $2$ or reaching state $1,$ whence $$f_2 = \frac{1}{2}t\left(f_2 + f_1\right).$$ The appearance of $t/2$ suggests our work will be made easier by introducing the variable $x=t/2,$ giving $$f_1(x) = x(f_2(x) + f_0(x));\quad f_2(x) = x(f_2(x) + f_1(x)).$$ Substituting the first into the second and recalling $f_0=1$ gives $$f_2(x) = x(f_2(x) + x(f_2(x) + 1))\tag{*}$$ whose unique solution is $$f_2(x) = \frac{x^2}{1 - x - x^2}.\tag{**}$$ I highlighted the equation $(*)$ to emphasize its basic simplicity and its formal similarity to the equation we would obtain by analyzing only the expected values $E[X_i]:$ in effect, for the same amount of work it takes to find this one number, we get the entire distribution.
Implications and simplification

Equivalently, when $(*)$ is written out term-by-term and the powers of $t$ are matched it asserts that for $n\ge 4,$ $$2^n\Pr(X_2=n) = 2^{n-1}\Pr(X_2=n-1) + 2^{n-2}\Pr(X_2=n-2).$$ This is the recurrence for the famous sequence of Fibonacci numbers $$(F_n) = (1,1,2,3,5,8,13,21,34,55,89,144,\ldots)$$ (indexed from $n=0$). The solution matching $(**)$ is this sequence shifted by two places (because there is no probability that $X_2=0$ or $X_2=1$ and it is easy to check that $2^2\Pr(X_2=2)=1=2^3\Pr(X_2=3)$). Consequently $$\Pr(X_2 = n) = 2^{-n}F_{n-2}.$$ More specifically, $$\eqalign{ f_2(t) &= 2^{-2}F_0t^2 + 2^{-3}F_1 t^3 + 2^{-4} F_2 t^4 + \cdots \\ &= \frac{1}{4}t^2 + \frac{1}{8}t^3 + \frac{2}{16}t^4 + \frac{3}{32}t^5 + \frac{5}{64}t^6 + \frac{8}{128}t^7 +\frac{13}{256}t^8 + \cdots. }$$ The expectation of $X_2$ is readily found by evaluating the derivative $f^\prime$ and substituting $t=1,$ because (differentiating the powers of $t$ term by term) this gives the formula $$f^\prime(1) = \Pr(X_2=0)(0) + \Pr(X_2=1)(1)1^0 + \cdots + \Pr(X_2=n)(n)1^{n-1} + \cdots$$ which, as the sum of the probabilities times the values of $X_2,$ is precisely the definition of $E[X_2].$ Taking the derivative using $(**)$ produces a simple formula for the expectation.

Some brief comments

By expanding $(**)$ as partial fractions, $f_2$ can be written as the sum of two geometric series. This immediately shows the probabilities $\Pr(X_2=n)$ will decrease exponentially.
It also yields a closed form for the tail probabilities $\Pr(X_2 \gt n).$ Using that, we can quickly compute that $\Pr(X_2 \ge 100)$ is a little less than $10^{-9}.$ Finally, these formulas involve the Golden Ratio $\phi = (1 + \sqrt{5})/2.$ This number is the length of a chord of a regular pentagon (of unit side), yielding a striking connection between a purely combinatorial Markov chain on the pentagon (which "knows" nothing about Euclidean geometry) and the geometry of a regular pentagon in the Euclidean plane.
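As a numerical check of the formulas above (my own addition, not part of the original answer), the distribution $\Pr(X_2=n)=F_{n-2}/2^n$ can be built exactly with rational arithmetic and compared against the claims that the probabilities sum to $1$, that $E[X_2]=6$, and that the tail beyond $100$ is a little less than $10^{-9}$:

```python
from fractions import Fraction

# Build Pr(X_2 = n) = F_{n-2} / 2^n exactly, with F_0 = F_1 = 1 (the indexing
# used above), for n = 2 .. N; the truncated tail beyond N is astronomically small.
N = 400
fib = [1, 1]
while len(fib) < N:
    fib.append(fib[-1] + fib[-2])

p = {n: Fraction(fib[n - 2], 2 ** n) for n in range(2, N + 1)}

total = sum(p.values())                       # should be essentially 1
mean = sum(n * pn for n, pn in p.items())     # should be essentially 6
tail_100 = sum(pn for n, pn in p.items() if n >= 100)

print(float(total), float(mean), float(tail_100))
```

The Fibonacci recurrence $2^n\Pr(X_2=n)=2^{n-1}\Pr(X_2=n-1)+2^{n-2}\Pr(X_2=n-2)$ can also be verified exactly on these fractions.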
34,348
Worm and Apple Expected Value
For the mean number of days until dinner, condition on the step taken on the first day. Let $X$ be the number of days until the worm gets the apple, and let $F$ be the first step. Then we have $$E[X]=E[X\mid F=B]\,P(F=B)+E[X\mid F=D]\,P(F=D)$$ If the first step is to $B,$ then either the worm gets the apple on day 2 with probability one-half, or it is back to vertex $C$ with probability one-half and it starts over. We can write this as $$E[X\mid F=B]=2 \left( \frac{1}{2} \right) + \left(2+E[X] \right) \left( \frac{1}{2} \right)=2+\frac{E[X]}{2}$$ If the first step is to $D,$ then by symmetry this is the same as being at vertex $C$ except that the worm has taken a single step, so $$E[X\mid F=D]=1+E[X]$$ Putting it all together, we get $$E[X] = \left( 2+\frac{E[X]}{2} \right)\left( \frac{1}{2} \right) + \left( 1 + E[X] \right)\left( \frac{1}{2} \right) $$ Solving for $E[X]$ yields $$E[X] = 6$$
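A quick Monte Carlo sanity check (my own addition): simulate the walk on the pentagon directly and compare the average number of days against the exact answer $6$. The vertex labels below assume the apple sits at $A$ and the worm starts at $C$, which is one consistent reading of the setup in this answer:

```python
import random

# Adjacency of the pentagon cycle A-B-C-D-E; each day the worm moves to one
# of the two neighbouring vertices with probability 1/2.
neighbours = {"A": "BE", "B": "AC", "C": "BD", "D": "CE", "E": "DA"}

def days_to_apple(rng):
    """Simulate one walk from C until the apple at A is reached."""
    v, days = "C", 0
    while v != "A":
        v = rng.choice(neighbours[v])
        days += 1
    return days

rng = random.Random(0)
trials = 200_000
mean_days = sum(days_to_apple(rng) for _ in range(trials)) / trials
print(mean_days)   # should be close to the exact answer E[X] = 6
```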
34,349
Why using cross validation is not a good option for Lasso regression?
So, the point is that when you define an optimal value of $\lambda$ you must ask: optimal for what? In the case of the LASSO, there are two possible goals:

Estimate $\lambda_{\text{pred}}$, the value of $\lambda$ that leads to the best prediction error.
Estimate $\lambda_{\text{ms}}$, the value of $\lambda$ that produces the correct model (or at least something that is close to it).

As Dr. Fox correctly notes, in general it is not the case that $\lambda_{\text{pred}} = \lambda_{\text{ms}}$, and typically $\lambda_{\text{pred}} < \lambda_{\text{ms}}$. But choosing $\lambda$ by cross-validation is using prediction error, and hence one would expect it to estimate $\lambda_{\text{pred}}$. Consequently, if you choose $\lambda$ by cross-validation, you may select a $\lambda$ which leads to a model which is too big. If your goal is recovery of the true model, it follows that one should be careful applying cross-validation.

I personally encounter this issue a lot when writing papers, whenever I do a simulation study looking at the lasso for variable selection. Invariably, using cross-validation to select $\lambda$ is a disaster. I have had much better luck applying Lasso$(\lambda)$ to select the model and then fitting by least squares, then applying cross-validation to this entire procedure to select $\lambda$. It's still not ideal, but it is a big improvement.

That's not to say that cross-validation is completely off the table for model selection; it's just that you need to think carefully about what $\lambda$ your method is estimating. For example, let's consider ignoring the lasso and just thinking about a low-dimensional linear regression. In this case, leave-one-out cross-validation is known to be more or less equivalent to some variant of AIC, and AIC is well-known to be inconsistent for model selection.
Similarly, BIC is generally associated with leave-$V$-out cross validation where $V$ is some function of the size of the data, and it is well-known that variants of BIC are model selection consistent. Hence, there is some way of doing cross-validation that we would expect to be consistent for model selection, but leave-one-out is not.
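The $\lambda_{\text{pred}} < \lambda_{\text{ms}}$ phenomenon is easy to see in a toy simulation (my own sketch, not code from the answer): a bare-bones coordinate-descent lasso, fit once at a small prediction-style $\lambda$ and once at a much larger selection-style $\lambda$. The data-generating setup (two true variables, eight noise variables) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]          # only the first two variables are real
y = X @ beta_true + rng.standard_normal(n)

def lasso_cd(X, y, lam, n_iter=300):
    """Bare-bones coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]      # partial residual
            z = X[:, j] @ r_j / n
            b[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return b

b_small = lasso_cd(X, y, lam=0.01)   # roughly the prediction-optimal regime
b_large = lasso_cd(X, y, lam=0.5)    # heavier shrinkage, aimed at selection

print("support at small lam:", np.flatnonzero(np.abs(b_small) > 1e-8))
print("support at large lam:", np.flatnonzero(np.abs(b_large) > 1e-8))
```

The small-$\lambda$ fit keeps noise variables in the model (a model "too big"), while the larger $\lambda$ recovers just the true support; this is the sense in which the prediction-tuned $\lambda$ sits below the model-selection one.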
34,350
Why using cross validation is not a good option for Lasso regression?
The lecturer's meaning is not entirely clear. She says:

But in the case of LASSO, I just want to mention that using these types of procedures, assessing the error on a validation set or using cross validation, it's choosing lambda that provides the best predictive accuracy. But what that ends up tending to do is choosing a lambda value that's a bit smaller than might be optimal for doing model selection.

Cross-validation can be used in two ways in LASSO: to choose an optimal $\lambda$ and to assess the predictive error. To the best of my knowledge, doing these things together in a single fold is not best practice. That's because you've chosen the $\lambda$ value that's optimal for a particular cross-validation fold; this naturally leads to overfitting, which favors smaller values of $\lambda$. I think better practice would be either to apply nested CV, so that for each training fold CV is applied again to find an optimal $\lambda$ in that iteration; or to apply CV in two batches: first a set of optimization-CV folds to find an optimal $\lambda$, then fix that value when fitting LASSO models in another batch of validation-CV folds to assess model error.
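The nested-CV fold structure described above can be sketched in plain Python (my own skeleton, not from the course). Here `fit_and_score` is a stand-in placeholder for "fit a LASSO at this $\lambda$ on the training rows and score it on the held-out rows"; the toy version simply peaks at $\lambda = 0.3$ so the control flow can be checked:

```python
import random

def k_folds(indices, k, rng):
    """Shuffle the indices and deal them into k disjoint folds."""
    idx = list(indices)
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def fit_and_score(train_idx, test_idx, lam):
    # Placeholder score: a real version would fit a LASSO on train_idx and
    # return held-out performance on test_idx.  This toy peaks at lam = 0.3.
    return -abs(lam - 0.3)

def nested_cv(n, lambdas, k_outer=5, k_inner=4, seed=0):
    rng = random.Random(seed)
    outer = k_folds(range(n), k_outer, rng)
    outer_scores = []
    for i, test in enumerate(outer):
        train = [j for f in outer[:i] + outer[i + 1:] for j in f]
        # Inner CV sees only the outer-training rows, so the outer test fold
        # never influences the choice of lambda.
        inner = k_folds(train, k_inner, rng)
        def inner_cv(lam):
            return sum(
                fit_and_score([j for f in inner[:a] + inner[a + 1:] for j in f],
                              inner[a], lam)
                for a in range(k_inner))
        best_lam = max(lambdas, key=inner_cv)
        outer_scores.append((best_lam, fit_and_score(train, test, best_lam)))
    return outer_scores

print(nested_cv(40, [0.1, 0.3, 1.0]))
```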
34,351
Why using cross validation is not a good option for Lasso regression?
Here is a very simple explanation of why there is a difference between modeling for research and modeling for prediction. I'll get into how this relates to cross-validation and Lasso by the end.

Let's say you have three possible models for a problem:

1.) A good predictive approximation with a mixture of true and accidental variables, with parameters tuned for prediction and very little theory behind it (the proverbial "black box").
2.) The true model, with all of and only the true variables and perfectly tuned parameters (i.e. the ground truth).
3.) A good approximation of the true model, with only true variables but not all of them, and parameters that are close to the true parameters (the ground truth minus some missing information).

Models 1 and 3 are possible in practice; model 2 almost always isn't. Model 3 is a better approximation for understanding 2, but very often 1 is actually better at predicting future values. Why? Because the "false" variables and parameters of 1 in some sense encode information that is present in 2 but not in 3 (maybe they are confounding variables in some sense). It is very unintuitive, because if you had model 2 you would obviously do better in both understanding and prediction. But you don't have model 2. In practice you have to choose between a functional approximation and a first-principles approximation. A motorcycle is a reasonable functional approximation of a car; a car that is missing a wheel is a terrible functional approximation while still being much closer in reality to a car.

Lasso regression combined with cross-validation is a great way of generating models in the first category. The problem is that there is no principled reason to think that it will get you closer to 2 or even 3. Maybe it will or maybe it won't, but you will do better being much more aggressive in your regularization than predictive accuracy would suggest if you want to get close to 3.
Of course, eventually all approximations approach the ground truth, so very impressive predictive accuracy is some evidence of having a true model.
34,352
What are the minimum and maximum values of variance? [closed]
I interpret this question as asking:

Given a set of $N$ numbers $x_1, x_2, \ldots,x_N$, what are the minimum and maximum values that the variance $V$, defined as $$V = \frac 1N \sum_{k=1}^N (x_k-\bar{x})^2 ~~ \text{where}~\bar{x}=\frac 1N \sum_{k=1}^N x_k,$$ can take on?

Well, the minimum value of $V$ is $0$, as Daniel Lopez's comment points out, and it occurs if and only if all the $N$ numbers have the same value. At the other end, every finite set of real numbers has a (finite) upper bound (call it $b$) and a (finite) lower bound (call it $a$), and $$V \leq \frac{(b-a)^2}{4} = \left(\frac{b-a}{2}\right)^2 = \left(\frac{\mathcal R}{2}\right)^2\tag{1}$$ where $\mathcal R$ is the range of the set of $N$ numbers. Note that it is not necessary to know the values of $b$ and $a$ separately; we only need the range $\mathcal R = (b-a)$ to calculate the upper bound $(1)$ on $V$.

If $N$ is an even number, there exist sets for which the bound $(1)$ holds with equality: these are sets for which $\frac N2$ of the $x_k$ have value $b$ and the other $\frac N2$ have value $a$. For odd $N$, the bound $\frac{(b-a)^2}{4}$ still applies but cannot be attained with equality if $N>1$. For odd $N>1$, the maximum value is $\frac{(b-a)^2}{4}\frac{N^2-1}{N^2}$ and is attained by a set in which $\frac{N-1}{2}$ of the $x_k$ have value $b$ and $\frac{N+1}{2}$ have value $a$, or vice versa. For details, see this answer of mine on math.SE.

In another answer (and the comments on it), @Jim has argued that "A set of $N$ real numbers" tells the listener nothing whatsoever about the set if you don't know what any of the values are, even the minimum or maximum, and so the only completely correct answer is that the maximum possible $V$ is unbounded: any other answer (such as mine above, or a couple of possible answers suggested by Jim) must be festooned with caveats that the answer is based on assumptions that might be unwarranted. I disagree. Even if a secretive questioner is unwilling to share any details about the set he/she is concerned about, my answer gives the questioner enough information to find the maximum possible value of $V$ for him/herself from very minimal information about the set: just the range suffices, no need to know even what $N$ is!

EDIT: (by AHK) Corrected the maximum variance for odd $N>1$ and the corresponding choice of $x_i$'s.
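Both the bound $(1)$ and the odd-$N$ correction factor are easy to verify numerically (my own check, not part of the answer). Since the argument above shows the maximizer puts every point at one of the two extremes, it suffices to scan the number $k$ of points placed at $b$; a random search then confirms the general bound is never exceeded:

```python
import random

def pop_var(xs):
    """Variance with the 1/N divisor, matching the definition in the answer."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

a, b = 0.0, 1.0
results = {}
for N in (4, 5):
    # Scan k = number of points placed at b (the rest sit at a).
    best = max(pop_var([b] * k + [a] * (N - k)) for k in range(N + 1))
    bound = (b - a) ** 2 / 4
    if N % 2 == 1:
        bound *= (N ** 2 - 1) / N ** 2   # the odd-N correction factor
    results[N] = (best, bound)
print(results)

# Random sets inside [a, b] should never exceed the general bound (1).
rng = random.Random(1)
for _ in range(10_000):
    xs = [rng.uniform(a, b) for _ in range(rng.randrange(2, 9))]
    assert pop_var(xs) <= (b - a) ** 2 / 4 + 1e-12
```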
34,353
What are the minimum and maximum values of variance? [closed]
It depends... But on what exactly? The answer depends on the assumptions that you make.

1. If I read your question most literally: you know all data values. In that case, there is no need for bounds (minimum or maximum), as you can simply calculate the variance of the data values in the array with: $$ \text{var}(\mathbf{x}) = \frac{1}{N} \sum_{i=1}^N (x_i - \bar{x})^2 \, . $$

2. Now, say, you do not know any of the values; only that there are $N$. In other words: you have not seen the sample, but only know the sample size. Then the answer depends on what you assume about where the sample came from, i.e. the population.

2.1 If you make no assumptions about the population (equivalently: the underlying distribution), then you cannot say anything about an upper bound for the sample variance. Take the example of a $t$-distribution with $\nu \le 2$ degrees of freedom. The population variance is infinite, and the sample variance cannot be bounded from above (unless $N = 1$). Why? Because for every sample you provide, I can increase its variance by pushing the minimum and maximum values farther from the mean. Please note, this same argument holds for a standard normal distribution! Even though it has population variance equal to $1$, one can create samples with arbitrarily large sample variance.

2.2 If you assume that the population "lives on" bounded support, then Dilip Sarwate's answer will suffice: on support $[0, \, c]$ the sample variance is maximally $c^2 / 4$ (multiplied by $(N^2-1)/N^2$ for odd $N$).

P.S. Since the variance is essentially a weighted sum (integral) of non-negative terms (integrand), it is non-negative itself and bounded from below by $0$. I therefore concentrated on the upper bound in my answer.
34,354
Why is the OLS assumption "no perfect multicollinearity" so vital?
I bet this was covered a million times on this board. In a nutshell: because the design matrix becomes degenerate, and there is no unique solution to the linear algebra problem of OLS. There will be an infinite number of equally good solutions, and there's no way to tell which one is better.

Technical details: the design matrix is a matrix that is constructed by putting all $p$ variables in columns and all $n$ observations in rows. It is $X_{ij}$, where $i$ indexes rows from 1 to $n$ and $j$ indexes columns from 1 to $p$. It so happens that when there is perfect collinearity, the matrix $X$ can be reduced to a matrix $X'_{ik}$ where each column now represents a new set of variables $k=[1, p']$ such that $p'<p$. In other words, the new design matrix $X'$ has fewer columns than the original, yet no information was lost. In this case the usual solution $\beta=(X^TX)^{-1}X^TY$ does not exist because $X^TX$ is singular. On the other hand the solution $\beta'=(X'^TX')^{-1}X'^TY$ does exist on the new set of variables.

So, the only problem with perfect collinearity is that the original set of variables does not have a unique solution, but it does have solutions. The implication is that you can pick any of the non-unique solutions, and it will be as good as any other. Note, it will not be as bad as any other. So, you can use this solution to predict $Y'$. The only problem is that you'll have to step outside a typical OLS method to find the solutions, because OLS' linear algebra trick doesn't work. Things like gradient descent will work.
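A tiny numeric illustration of the degeneracy (my own sketch, with made-up data): when one column is a multiple of another, the Gram matrix $X^TX$ is singular, yet several distinct coefficient vectors produce exactly the same fitted values:

```python
# Two-column design where x2 = 2*x1, so X^T X has no inverse.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [2.0 * v for v in x1]           # perfectly collinear with x1
y  = [3.0 * v for v in x1]           # true relationship: y = 3*x1

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# det(X^T X) for the design [x1 x2]; zero by the equality case of
# Cauchy-Schwarz, so (X^T X)^{-1} does not exist.
det = dot(x1, x1) * dot(x2, x2) - dot(x1, x2) ** 2
print("det(X^T X) =", det)

# Two of the infinitely many equally good solutions: since
# b1*x1 + b2*x2 = (b1 + 2*b2)*x1, only b1 + 2*b2 = 3 matters.
fit_a = [3.0 * u + 0.0 * w for u, w in zip(x1, x2)]
fit_b = [1.0 * u + 1.0 * w for u, w in zip(x1, x2)]
print(fit_a == fit_b)
```

Both coefficient vectors $(3,0)$ and $(1,1)$ reproduce $Y$ perfectly, which is the "infinite number of equally good solutions" in action.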
34,355
Why is the OLS assumption "no perfect multicollinearity" so vital?
Perfect multicollinearity leads to great pain. Suppose your data $Y$ is generated by a single parameter $X$ with an added noise process $u$, so $Y = \beta X + u$. Now let me (foolishly!) adopt a model $Y = \beta_1 X_1 + \beta_2 X_2 + u$, where there is perfect multicollinearity, say $X_1 = X_2$. I try to find $\beta_1, \beta_2$ by regression. But the least-squares error is minimized equally well for many solutions, just as long as $\beta_1 + \beta_2 = \beta$. So in fact, there's no way to state independent values for $\beta_1$ and $\beta_2$ in this case. There is also no way to state confidence intervals for $\beta_1$ and $\beta_2$. Personally, I don't regard any statistical quantity as meaningful unless I can give a confidence interval.
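A quick numeric sketch of this non-identifiability (my own illustration): with $X_1 = X_2$, every pair $(\beta_1, \beta_2)$ with the same sum minimizes the least-squares error equally well.

```python
# Duplicated predictor X1 = X2 = x, true relationship y = 2*x (so beta = 2).
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]

def rss(b1, b2):
    """Residual sum of squares of y ~ b1*X1 + b2*X2 with X1 = X2 = x."""
    return sum((yi - (b1 + b2) * xi) ** 2 for xi, yi in zip(x, y))

# All of these minimise the least-squares error equally well (RSS = 0),
# because only b1 + b2 = 2 is identified:
candidates = [(2.0, 0.0), (0.0, 2.0), (1.0, 1.0), (-5.0, 7.0)]
losses = [rss(b1, b2) for b1, b2 in candidates]
```

Since every candidate achieves the minimum, no data-based procedure can prefer one, which is exactly why confidence intervals for the individual coefficients are unavailable.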
34,356
What is auxiliary loss as mentioned in PSPNet paper
The idea of an auxiliary loss (aka auxiliary towers) comes from the GoogLeNet paper. The core intuition can be explained this way: let's say you are building a network by stacking up lots of identical modules. As the network becomes deeper, you face slowed-down training because of the vanishing-gradient issue (this was before BatchNorm days). To promote learning in each module layer, you can attach a small network to the output of that module. This network typically has a couple of conv layers followed by FCs and then a final classification prediction. This auxiliary network's task is to predict the same label as the final network would predict, but using the module's output. We add the loss of this aux network to the final loss of the entire network, weighted by some value < 1. For example, in GoogLeNet, you can see two tower-like aux networks on the right ending in orange nodes. Now, if the module is learning slowly then it will generate a big loss and cause gradient flow in that module, helping gradients further downstream as well. This technique has apparently been found to help training for very deep networks. Even when using batch norm, it can help to accelerate training during early cycles when weights are randomly initialized. Many NAS architectures use this technique for initial evaluation during the search, as you have a very limited budget of epochs when evaluating 1000s of architectures, so early acceleration improves performance. As the aux networks are removed from the final model, this is not considered "cheating".
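The loss combination described above can be sketched in a few lines (a toy illustration, not actual GoogLeNet or PSPNet code; the 0.3 weight mirrors the value the GoogLeNet paper used for its auxiliary classifiers):

```python
def combined_loss(main_loss, aux_losses, aux_weight=0.3):
    """Total training loss: the main head's loss plus down-weighted losses
    from the auxiliary towers. At inference time the aux heads are removed,
    so only main_loss is relevant then."""
    return main_loss + aux_weight * sum(aux_losses)

# e.g. main cross-entropy of 1.2, two aux towers with losses 1.5 and 1.8:
total = combined_loss(1.2, [1.5, 1.8])   # 1.2 + 0.3 * (1.5 + 1.8) = 2.19
```

Because the aux terms enter the sum, backpropagating `total` injects gradient directly at each tapped intermediate layer, which is the mechanism the answer describes.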
34,357
What is auxiliary loss as mentioned in PSPNet paper
I'm not totally sure about the use of the auxiliary loss in PSPNet, but in general such an auxiliary loss is used in networks with many layers. It helps to reduce the vanishing-gradient problem for earlier layers, stabilizes the training, and acts as a regularizer. It's only used for training, not for inference. GoogLeNet also used auxiliary losses: https://arxiv.org/abs/1409.4842
34,358
Soft version of the maximum function?
Consider the function $\text{hardmax}(z)^Tz$ for $z = [1, 2, 3, 4, 5]$, where hardmax is a hard version of softmax which returns 1 for the maximum component and 0 for all the other components. Then we will have $[0, 0, 0, 0, 1] ^T [1, 2, 3, 4, 5] = 5$. On the other hand, softmax of $z$ will be approximately $[0.01, 0.03, 0.09, 0.23, 0.64]$, so $[0.01, 0.03, 0.09, 0.23, 0.64] ^T [1, 2, 3, 4, 5] = 4.46$. As you can see, softmax produces a weighted average of the components in which the larger components are weighted more heavily.
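Reproducing the arithmetic (my own snippet, stdlib only; note that with full-precision weights the dot product rounds to 4.45, and the 4.46 above comes from rounding the weights to two decimals first):

```python
import math

z = [1, 2, 3, 4, 5]
exps = [math.exp(v) for v in z]
total = sum(exps)
w = [e / total for e in exps]                       # softmax weights
soft_max_value = sum(wi * zi for wi, zi in zip(w, z))

rounded_weights = [round(wi, 2) for wi in w]        # [0.01, 0.03, 0.09, 0.23, 0.64]
```

The largest component (5) dominates the average but does not fully determine it, which is exactly the "soft" behaviour being described.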
34,359
Soft version of the maximum function?
You can construct a smoother version of the max function using the softmax function, as the expression in your book suggests. Consider the following formulation of a max function: $$\max(z_1,\dots,z_n)=\mathrm{argmax}(z)\times z^T$$ Here argmax returns a vector of 0s and 1s (a one-hot indicator of the maximum). Thus it produces a rough max function. Rough in the sense that its first derivative w.r.t. its arguments is discontinuous: it's either 0 or 1. Whenever $z_i=z_j$ for the two largest components, the first derivative jumps between 0 and 1. By replacing argmax with what machine learning people call softmax, you get a smooth version of the max function too, as suggested in your book. Here's a couple of charts to demonstrate the point. The first is the surface of an ordinary $\max(x_1,x_2)$ function. Compare it to the version using the expression from your textbook, $\mathrm{softmax}(x_1,x_2)^T\times (x_1,x_2)$. A smoother version of max can be easier to deal with analytically.
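The "roughness" of the first derivative can be checked numerically (my own sketch; finite differences on either side of the kink of $\max(x_1,x_2)$ at $x_1 = x_2$):

```python
import math

def soft_max2(x1, x2):
    """softmax-weighted combination of two inputs: a smooth max."""
    w1 = math.exp(x1) / (math.exp(x1) + math.exp(x2))
    return w1 * x1 + (1.0 - w1) * x2

def hard_max2(x1, x2):
    return max(x1, x2)

def deriv(f, x1, x2, h=1e-6):
    """Central-difference derivative of f with respect to x1."""
    return (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)

# Slope w.r.t. x1 just below and just above the kink at x1 = x2 = 2:
hard_below = deriv(hard_max2, 1.9, 2.0)   # 0: x1 is not the max here
hard_above = deriv(hard_max2, 2.1, 2.0)   # 1: x1 is the max here
soft_below = deriv(soft_max2, 1.9, 2.0)   # ~0.45: transitions smoothly...
soft_above = deriv(soft_max2, 2.1, 2.0)   # ~0.55: ...through 0.5 at the kink
```

The hard max's slope jumps from 0 to 1 across the kink, while the softmax version passes smoothly through it, which is the behaviour the surfaces in the charts illustrate.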
34,360
Soft version of the maximum function?
softmax is a smooth approximation of the argmax function,* taking a vector and returning a vector: $$\text{softmax}(x) = \frac{e^{\beta x}}{\sum{e^{\beta x}}} \xrightarrow{\;\beta\to\infty\;} \text{argmax}(x)$$ This takes a vector as input and returns a vector as output (a one-hot encoding of the max's index, as opposed to an ordinal position). In order to get a smooth approximation of the max function, which returns the largest value in a vector (not its index), one can take the dot product of the softmax with the original vector: $$\text{softmax}(x)^Tx \to \text{argmax}(x)^Tx = \max(x)$$ * Note that softmax, in the case of multiple identical maximum values, will return a vector with $1/n$ in the maximum values' arguments' positions, not multiple 1s. * In the standard softmax, $\beta = 1$; as $\beta$ approaches infinity, the function approaches argmax.
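A small sketch (mine, not from the answer) of the role of $\beta$, using the standard subtract-the-max trick so that large $\beta$ does not overflow the exponential:

```python
import math

def softmax(x, beta=1.0):
    m = max(x)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(beta * (v - m)) for v in x]
    s = sum(exps)
    return [e / s for e in exps]

x = [1, 2, 3, 4, 5]
w_soft = softmax(x, beta=1)     # spread-out weights
w_sharp = softmax(x, beta=50)   # nearly one-hot at the argmax position
smooth_max = sum(wi * vi for wi, vi in zip(w_sharp, x))  # approaches max(x) = 5
```

With $\beta = 1$ the weights are spread over all components; with $\beta = 50$ the output is essentially the one-hot argmax vector, and the dot product with $x$ recovers the max.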
34,361
Calculating survival probability per person at time (t) from Cox PH
The Cox proportional hazards model can be described as follows: $$h(t|X)=h_{0}(t)e^{\beta X}$$ where $h(t)$ is the hazard rate at time $t$, $h_{0}(t)$ is the baseline hazard rate at time $t$, $\beta$ is a vector of coefficients and $X$ is a vector of covariates. As you will know, the Cox model is a semi-parametric model in that it is only partially defined parametrically. Essentially, the covariate part assumes a functional form whereas the baseline part has no parametric functional form (its form is that of a step function). Additionally, the survival curve of the Cox model is: $$\begin{align} S(t|X)&=\text{exp}\bigg(-\int_{0}^{t}h_{0}(u)e^{\beta X}\,du\bigg)\\ &=\text{exp}\big(-H_{0}(t)\big)^{\text{exp}(\beta X)}\\ &=S_{0}(t)^{\text{exp}(\beta X)} \end{align}$$ where $H_{0}(t)=\int_{0}^{t}h_{0}(u)\,du$, $S_{0}(t)=\text{exp}\big(-H_{0}(t)\big)$, $S(t)$ is the survival function at time $t$, $S_{0}(t)$ is the baseline survival function at time $t$ and $H_{0}(t)$ is the baseline cumulative hazard function at time $t$. The R function basehaz() provides the estimated cumulative hazard function, $H_{0}(t)$, defined above. For example, I can fit a Cox PH model with a single covariate sexn in R as follows: f=formula(sv~factor(sexn)) cox.fit=coxph(f) I can then extract (and plot) the underlying baseline cumulative hazard function as follows: bh=basehaz(cox.fit) plot(bh[,2],bh[,1],main="Cumulative hazard function",xlab="Time",ylab="H0(t)") Now, because of the proportional nature of the Cox model, to obtain the survival curves of the two groups defined by their sexn value I can just raise the baseline survival function $\text{exp}(-H_{0}(t))$ to the power of $\text{exp}(\hat\beta)$ for the estimated coefficient of sexn.
For example, for my variable $sexn=\{0,1\}$, the two survival curves would be: $$S(t|X=0)=\text{exp}(-H_{0}(t))^{\text{exp}(\beta(0))}=\text{exp}(-H_{0}(t))$$ and $$S(t|X=1)=\text{exp}(-H_{0}(t))^{\text{exp}(\beta(1))}=\text{exp}(-H_{0}(t))^{\text{exp}(\beta)}$$ If you want to see the relative survival, you can just plot the curves as follows: plot(bh[,2],exp(-bh[,1])^(exp(cox.fit$coef)),xlim=c(40,85),ylim=c(0,1), col="red",main="Survival curves for two groups",xlab="Time",ylab="S(t|X)") par(new=TRUE) plot(bh[,2],exp(-bh[,1]),xlim=c(40,85),ylim=c(0,1), col="blue",main="Survival curves for two groups",xlab="Time",ylab="S(t|X)") legend("topright",c("sexn=1","sexn=0"),lty=c(1,1),col=c(2,4)) Thus, you can see that the group with $sexn=1$ has a lower survival than the group with $sexn=0$. If you want to measure the relative survival of the two groups you can do so in many ways. You can say that for two individuals (differing in only $sexn$) that start at $\text{Time}=40$, the individual with $sexn=1$ has a lower probability of surviving to any time $t>40$ compared with the individual with $sexn=0$. I believe what you are trying to achieve is to calculate the survival estimate: $$S(t=30|X)$$ This can be achieved by fitting a Cox model to a given survival object and applying the estimated coefficients to each individual depending on their individual covariates. This will scale the baseline survival curve and give you the desired survival estimate for each of your individuals.
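To illustrate that last step in code (a made-up numeric sketch of mine, not tied to the sexn data: the baseline cumulative hazard values and $\beta$ below are hypothetical placeholders for what basehaz() and coxph() would return):

```python
import math

# Hypothetical baseline cumulative hazard H0(t) at a few time points,
# plus a hypothetical fitted coefficient for a binary covariate x.
H0 = {10: 0.05, 20: 0.15, 30: 0.40}
beta = 0.6

def survival(t, x):
    """Per-individual survival S(t|X=x) = exp(-H0(t)) ** exp(beta * x)."""
    return math.exp(-H0[t]) ** math.exp(beta * x)

s0_30 = survival(30, 0)   # baseline group: exp(-0.40) ≈ 0.67
s1_30 = survival(30, 1)   # x = 1 group: lower survival, since beta > 0
```

Replacing the scalar `x` with each individual's covariate vector (and `beta * x` with the linear predictor) gives the per-person survival estimate $S(t=30|X)$ the answer describes.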
34,362
Analysis of data given as intervals instead of points
The data are censored, specifically interval-censored. Censoring, especially right-censoring (a start but no end), is a common feature of time-to-event data and is dealt with under survival analysis (Medicine) or reliability analysis (Engineering). For parametric modelling of such data the key insight is that contributions to the joint likelihood from uncensored data are of the form $$f(x_i)$$ while those from censored data are of the form $$F\left(x_i^\mathrm{(end)}\right)-F\left(x_i^\mathrm{(start)}\right),$$ where $f(\cdot)$ is the density & $F(\cdot)$ the distribution function. Under the assumption of independent censoring—which you shouldn't leap to—these are the only parts of the likelihood needed for inference, as the censoring times contain no additional information about the parameters. If a normal distribution seems appropriate, start off with a contour plot of the likelihood against the mean & variance parameters, then improve initial maximum-likelihood estimates numerically.
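A minimal sketch of such a likelihood (my own illustration, stdlib Python, assuming a normal model with known $\sigma$ and made-up intervals):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function (no scipy needed)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Each event time is only known to lie in (start, end); made-up data:
intervals = [(0.0, 1.0), (0.5, 1.5), (1.0, 2.0)]

def log_lik(mu, sigma=1.0):
    """Sum of log[F(end) - F(start)] under a Normal(mu, sigma) model."""
    return sum(
        math.log(norm_cdf((b - mu) / sigma) - norm_cdf((a - mu) / sigma))
        for a, b in intervals
    )

# Crude grid search for the maximum-likelihood mean:
grid = [i / 10.0 for i in range(-20, 41)]
mu_hat = max(grid, key=log_lik)
```

In practice you would maximise over $\mu$ and $\sigma$ jointly with a proper optimiser, but the grid search shows the likelihood peaking near the middle of the observed intervals, as expected.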
34,363
Analysis of data given as intervals instead of points
A good starting point for examining the univariate distribution would be to look at the Non-Parametric Maximum Likelihood Estimator (NPMLE). This is a generalization of the Kaplan-Meier curves (which are themselves a generalization of the Empirical Distribution Function), and it will give you a non-parametric estimate of the cumulative distribution function. Interestingly, this estimate is not unique (unlike the EDF or Kaplan-Meier curves), but rather known up to an interval. So you will get a pair of step functions that bound the NPMLE, rather than a single step function. While this estimator is good for examining the shape of a distribution, it can be a bit unstable, i.e. high variance in the estimates. One can fit standard parametric models, but it is still recommended to use the NPMLE at least for model checking. Many of the standard survival regression models are available (proportional hazards, accelerated failure time and proportional odds, for example). Interestingly, although the NPMLE has high variance for the estimates of the survival curve, the regression parameters in a semi-parametric model which uses the NPMLE for the baseline distribution do not suffer from this instability. So semi-parametric regression methods are quite popular for inference. @Scortchi and @whuber bring up important points about the generation of the beginning and end of the observation intervals ($x_i^{start}, x_i^{end}$ as defined by the OP). A standard simplifying assumption (which should be carefully considered) is that there is a set of inspection times $C_0 \leq C_1 \leq \ldots \leq C_k$ that are generated independently of the actual event time / outcome $t$ of interest (equality occurs when we observe the event time exactly). Then, all we observe is the interval $(C_j, C_{j+1})$ such that $t \in (C_j, C_{j+1})$. But if it seems plausible that the event time could strongly influence the inspection times, care must be taken in the analysis. 
As an example, suppose our event of interest was onset of tooth decay and our inspections were dentist visits. If we go to the dentist fairly regularly, then the assumption of independence seems reasonable. But if we very rarely go to the dentist except when our tooth hurts a lot, then $t$ is definitely influencing $C_j$! A brief tutorial for using these models in my R-package icenReg can be found here.
34,364
Normalized Root Mean Square Error (NRMSE) with zero mean of observed value
I think you can, but instead of dividing the RMSE by the mean, you may divide it by the (max - min) value.
34,365
Normalized Root Mean Square Error (NRMSE) with zero mean of observed value
I think Euan has the right answer. There are two ways to calculate the NRMSE: RMSE/(max()-min()) and RMSE/mean(). You should know which is better to use in your case. For example, when you are calculating the NRMSE of a house appliance, it is better to use RMSE/(max()-min()), because this way it reflects the NRMSE while the appliance is running. The reason your mean value is 0 could be that the data has both a positive part and a negative part; therefore, I think RMSE/(max()-min()) can show how your data spreads.
34,366
Normalized Root Mean Square Error (NRMSE) with zero mean of observed value
Euan Russano suggests dividing by the range of observations, which is common (see the NRMSD variant at https://en.wikipedia.org/wiki/Root-mean-square_deviation). But this would still be dividing by zero in your case, because the range of your observations is zero. Other measures of association (like correlation) will also be undefined, because the variance is zero.
34,367
Normalized Root Mean Square Error (NRMSE) with zero mean of observed value
You would normally divide by a measure of "spread". Either max(obs)-min(obs), as already mentioned, or directly the standard deviation of your observations, which is preferred for normally (or quasi-) distributed data. This is objective and gives your NRMSE nice units of "standard deviations of the observed data". You could also divide by the variance. If your observations are not constant, these two quantities should not be zero. Hope it helps.
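The spread-based normalizations described above can be sketched in a few lines of Python (a language swap from the R used elsewhere in the thread; the observation and prediction values are made up for illustration):

```python
import statistics

def rmse(obs, pred):
    # Root mean square error between observations and predictions
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

obs = [1.0, 3.0, 2.0, 4.0]
pred = [1.5, 2.5, 2.0, 3.5]
e = rmse(obs, pred)

nrmse_range = e / (max(obs) - min(obs))  # fraction of the observed range
nrmse_sd = e / statistics.stdev(obs)     # "standard deviations of the observed data"

# With constant observations (the situation in the question) both
# denominators are zero, so neither normalization is defined.
const = [2.0, 2.0, 2.0]
zero_range = max(const) - min(const)     # 0.0
```

Both denominators measure spread, so they fail together whenever the observed series is constant.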
34,368
Normalized Root Mean Square Error (NRMSE) with zero mean of observed value
Normalizing should be performed depending on the reference value. So, for forecasting studies, since we are trying to approach the real value, you may consider dividing by the real value. $$\textrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y - y_i)^2} $$ $$\textrm{NRMSE} = \textrm{RMSE}/y $$ Keep in mind that if you have only one sample, RMSE on its own can be misleading. Let's say the real value is 80, and the approximation is 60. If you apply RMSE, it will give you the difference between those values, not the percentage error. That is: $$\textrm{RMSE} = \sqrt{(80-60)^2/1}= 20.$$ However, $\textrm{NRMSE}$ will give you the error as a percentage: $\textrm{NRMSE}= 20/80 = 1/4 = 25\%. $
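The single-sample arithmetic above can be checked directly; this is a hypothetical Python sketch (the thread itself has no code here):

```python
import math

def rmse(actual, predicted):
    # Root mean square error over n paired values
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Real value 80, approximation 60, as in the example above
e = rmse([80], [60])   # an absolute difference of 20, not a percentage
nrmse = e / 80         # 0.25, i.e. a 25% relative error
```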
34,369
Using shrinkage when estimating covariance matrix before doing PCA
The paper you cited (Donoho et al. 2013 Optimal Shrinkage of Eigenvalues in the Spiked Covariance Model) is an impressive piece of work which I confess I did not really study. Nevertheless, I believe that it is easy to see that an answer to your question is negative: using any kind of shrinkage estimator of the covariance matrix will not improve your PCA results and, specifically, will not lead to "better understanding of the structure in the data". In a nutshell, this is because shrinkage estimators only affect the eigenvalues of the sample covariance matrix and not the eigenvectors. Let me quote the beginning of the abstract of Donoho et al.: Since the seminal work of Stein (1956) it has been understood that the empirical covariance matrix can be improved by shrinkage of the empirical eigenvalues. In this paper, we consider a proportional-growth asymptotic framework with $n$ observations and $p_n$ variables having limit $p_n/n \to \gamma \in (0,1]$. We assume the population covariance matrix $\Sigma$ follows the popular spiked covariance model, in which several eigenvalues are significantly larger than all the others, which all equal $1$. Factoring the empirical covariance matrix $S$ as $S = V \Lambda V'$ with $V$ orthogonal and $\Lambda$ diagonal, we consider shrinkers of the form $\hat{\Sigma} = \eta(S) = V \eta(\Lambda) V'$ where $\eta(\Lambda)_{ii} = \eta(\Lambda_{ii})$ is a scalar nonlinearity that operates individually on the diagonal entries of $\Lambda$. The abstract goes on to describe the paper's contributions, but what is important for us here is that the sample covariance matrix $S$ and its shrunken version $\hat\Sigma$ have the same eigenvectors. Principal components are given by projections of the data onto these eigenvectors, so they will not be affected by the shrinkage. The only thing that can get affected are the estimates of how much variance is explained by each PC, because these are given by the eigenvalues. (And as @Aksakal wrote in the comments, this can affect the number of retained PCs.) But the PCs themselves will not change.
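The key claim (eigenvalue shrinkers of the form $V \eta(\Lambda) V'$ leave the eigenvectors, and hence the PCs, unchanged) is easy to verify numerically. A NumPy sketch with a toy shrinker invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 5))
S = np.cov(X, rowvar=False)

# Factor S = V diag(lam) V' and apply a toy monotone eigenvalue shrinker
# eta(lam) = 1 + 0.5 * (lam - 1), i.e. pull each eigenvalue halfway toward 1.
lam, V = np.linalg.eigh(S)
S_shrunk = V @ np.diag(1 + 0.5 * (lam - 1)) @ V.T

# Re-decompose the shrunken matrix: same eigenvectors up to column sign,
# so the principal component scores X @ V are unchanged by the shrinkage.
lam2, V2 = np.linalg.eigh(S_shrunk)
signs = np.sign(np.sum(V * V2, axis=0))
same_vectors = np.allclose(V * signs, V2)
```

Only the eigenvalue estimates (the explained-variance figures) move; the projections do not.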
34,370
Using shrinkage when estimating covariance matrix before doing PCA
I think that shrinkage would not help in interpreting the data with PCA or reducing the dimensionality of a given data set. Shrinkage will help to make your analysis robust, i.e. if you have to use the outcome of PCA on other data sets. When you estimate the covariance matrix of a small but high-dimensional data set, the estimate becomes unstable; the estimation error is very high. So, if you apply what you have learned from this data set to other data sets, you may be in for an unpleasant surprise. Due to sampling error, you may see that your estimated covariance matrix doesn't match the new observations at all. So, shrinkage may help when there's some kind of default or prior knowledge of the covariances, maybe a theoretical asymptotic limit, etc. On the other hand, if this sample is all that you have to use the PCA analysis for, then you're dealing essentially with the population. Hence, the sample estimate becomes the population parameter, and you're fine. An example where shrinkage works is in portfolio theory in finance. There are many strains of this beast, and some of them posit that the variables are highly correlated with common factors, while the residual correlations between variables are small. This leads to a nice shrinkage target: a diagonal residual covariance matrix. It's not always, though, that you know what to shrink your estimate toward in this way.
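A minimal NumPy sketch of shrinking a noisy sample covariance toward a diagonal target with a fixed, hand-picked intensity (real estimators such as Ledoit-Wolf choose the intensity from the data; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((15, 10))   # few observations, many variables
S = np.cov(X, rowvar=False)         # unstable sample estimate

lam = 0.3                           # hand-picked shrinkage intensity
target = np.diag(np.diag(S))        # diagonal shrinkage target
S_shrunk = (1 - lam) * S + lam * target
# The variances are untouched; only the noisy off-diagonal covariances
# are damped toward zero.
```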
34,371
In Stan is there a way to use parameter posterior from old analysis as prior in new analysis?
As mentioned by previous answers, Stan, JAGS, and WinBUGS require that priors be specified as mathematical functions. If you've already got an MCMC-represented posterior from a previous analysis, and you want to use that MCMC posterior as a prior for subsequent data, you must approximate the MCMC posterior in a mathematical form. Unless you have a simple model with conjugate priors, the mathematical approximation of the MCMC distribution will be only an approximation. As was implicit in Bjorn's answer, it's important to include the correlations of the parameters from the MCMC distribution in the mathematical approximation for the prior. Finally, if the previous data and the novel data have exactly the same structure and you're using exactly the same model, then you can get an exact answer by combining the two data sets and running the model just once on the combined data.
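The last point (same model and data structure: pooling the data gives the exact answer) is easiest to see in a conjugate example; the counts below are made up:

```python
# Beta-Binomial: sequential updating equals one update on the pooled data.
a0, b0 = 1, 1                 # Beta(1, 1) prior

s1, f1 = 7, 3                 # successes/failures in the old data set
s2, f2 = 4, 6                 # successes/failures in the new data set

# Sequential: the posterior after data set 1 is the prior for data set 2
a1, b1 = a0 + s1, b0 + f1
a_seq, b_seq = a1 + s2, b1 + f2

# Pooled: a single update on the combined counts
a_pool, b_pool = a0 + s1 + s2, b0 + f1 + f2
```

With MCMC, the sequential route forces you to approximate the intermediate posterior in mathematical form, which is why refitting on the combined data is exact while chaining approximated priors is not.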
34,372
In Stan is there a way to use parameter posterior from old analysis as prior in new analysis?
Most software such as Stan, WinBUGS, SAS etc. requires you to provide an analytic form for the prior instead of MCMC samples. Possible ways around it are to refit the model with all the data, or to approximate the posterior with e.g. some mixture distribution (e.g. of bivariate normals for $\mu $ and $\log \sigma$ - e.g. in R using the mclust package, or in SAS using PROC FMM) and to use the mixture log-pdf as the new prior.
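As a sketch of the approximation idea, here is a one-component version in NumPy: moment-match a bivariate normal to (hypothetical) MCMC draws of $(\mu, \log \sigma)$ and use its log-pdf as the new prior. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for 4000 MCMC draws of (mu, log sigma) from the earlier fit
draws = rng.multivariate_normal([2.0, -0.5],
                                [[0.04, 0.01],
                                 [0.01, 0.02]], size=4000)

m = draws.mean(axis=0)               # posterior means
C = np.cov(draws, rowvar=False)      # keeps the mu / log-sigma correlation
Cinv = np.linalg.inv(C)
_, logdet = np.linalg.slogdet(C)

def approx_log_prior(theta):
    # Log-density of the moment-matched bivariate normal at theta
    d = np.asarray(theta) - m
    return -0.5 * (2 * np.log(2 * np.pi) + logdet + d @ Cinv @ d)

lp = approx_log_prior([2.0, -0.5])
```

In the new Stan model this density would be declared analytically (e.g. a multivariate normal prior with mean `m` and covariance `C`); a mixture of such components, as suggested above, approximates non-normal posteriors better.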
34,373
In Stan is there a way to use parameter posterior from old analysis as prior in new analysis?
Could you clarify, what did you mean by "I already have a posterior on y"? Do you mean you have generated samples to describe the distribution of the various parameters that were fit to your data (y), in accordance with the model you defined? If this is the case, you still need to apply those parameter fits to your input data to create predictions for each observation - which you can do by "reverse engineering" your model statement, using parameter values supplied by Stan - or by generating predictions in the generated quantities block (example here, or I can provide more detail on this if necessary, just post your original model statement). Once you have predictions, you can certainly apply any other kind of additional modeling to them, within or outside of Stan. You might also be asking if you can "do something" with not only the point values of the parameters that came out of your first model, but also the shape of their distributions? If that's the case, I'd suggest looking into hierarchical models, which are awesome tools but come with their own set of challenges. If you haven't seen it yet, I'd recommend John Kruschke's book Doing Bayesian Data Analysis as a great resource for this kind of thing. Chapter 9 is all about hierarchical models.
34,374
Taking into account the uncertainty of p when estimating the mean of a binomial distribution
There are several problems with your approach. First, you want to use confidence intervals for something that they were not designed for. If $p$ varies, then a confidence interval will not show you how it varies. Check Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? to learn more about confidence intervals. Moreover, using the normal approximation for a binomial proportion and its confidence intervals is not a good idea, as described by Brown et al (2001). In fact, from your description it sounds like you want to estimate a Bayesian credible interval, i.e. an interval that will contain a certain fraction of $p$'s distribution. Yes, I said Bayesian, since in fact you already defined your problem as a Bayesian model. You say that you assume that $p$ is a random variable, while in a frequentist setting $p$ would be a fixed parameter. If you already assumed it, why not use a Bayesian model for your data? You would be using the beta-binomial model (see also the An introduction to the Beta-Binomial model paper by Dan Navarro and Amy Perfors). In cases like this it is extremely easy to estimate such a model. We can define it as follows: $$ X \sim \mathrm{Binomial}(N, p) \\ p \sim \mathrm{Beta}(\alpha, \beta) $$ so your data $X$ follows a binomial distribution parametrized by $N$ and $p$, where $p$ is a random variable. We assume a beta distribution with parameters $\alpha$ and $\beta$ as a prior for $p$. I guess that if you wanted to use a frequentist method, you do not have any prior knowledge about the possible distribution of $p$, so you would choose an "uninformative" prior parametrized by $\alpha = \beta = 1$, or $\alpha = \beta = 0.5$ (if you prefer, you may translate those parameters to mean and precision, or mean and variance).
After updating your prior, the posterior distribution of $p$ is simply a beta distribution parametrized by $$ \alpha' = \alpha + \text{total number of successes} \\ \beta' = \beta + \text{total number of failures} $$ with mean $\alpha'/(\alpha'+\beta')$, so that the posterior mean of $X$ is $$ E(X) = N \frac{\alpha'}{\alpha'+\beta'} $$ To read more about calculating other quantities of this distribution, check the Wikipedia article on the beta-binomial distribution. You can compute credible intervals numerically either by (a) inverting numerically the cumulative distribution function of the beta-binomial distribution, or by (b) sampling a large number of random values from the beta-binomial distribution and then computing sample quantiles from them. The second approach is pretty easy, since you only need to sequentially repeat the following procedure: draw $p$ from the beta distribution parametrized by $\alpha'$ and $\beta'$, then draw $x$ from the binomial distribution parametrized by $p$ and $N$, until you have drawn a sample large enough to be confident in calculating the quantities of interest. Of course, if you know the mean and standard deviation of $p$ and you insist on using the normal distribution for it, you can use simulation as well, but using the normal distribution for simulating the values of $p$. Below I provide example code in R for such a simulation.

    R <- 1e5      # number of samples to draw in simulation
    N <- 500      # known N
    mu <- 0.3     # known mean of p
    sigma <- 0.07 # known standard deviation of p

    p <- rnorm(R, mu, sigma) # simulate p
    x <- rbinom(R, N, p)     # simulate X

    mean(x)                        # estimate for mean of X
    quantile(p*N, c(0.025, 0.975)) # 95% interval estimate for variability of E(X)

Or you can simply take appropriate quantiles using the inverse of the normal cumulative distribution function and multiply them by $N$. Remember however that this is not a confidence interval, but a credible interval. Brown, L.D., Cai, T.T., & DasGupta, A. (2001). Interval estimation for a binomial proportion. Statistical Science, 101-117.
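The same two-step simulation, using the conjugate beta posterior instead of a normal for $p$, looks like this in Python ($N$ and the counts are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500                          # known number of trials
a, b = 1, 1                      # uniform Beta(1, 1) prior
successes, failures = 150, 350   # hypothetical observed totals

# Conjugate update: alpha' = alpha + successes, beta' = beta + failures
a_post, b_post = a + successes, b + failures

# Procedure (b): repeatedly draw p, then draw x given p
p = rng.beta(a_post, b_post, size=100_000)
x = rng.binomial(N, p)

post_mean = N * a_post / (a_post + b_post)   # analytic E(X)
ci = np.quantile(x, [0.025, 0.975])          # 95% credible interval for X
```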
34,375
What is the difference between the theoretical distribution and the empirical distribution?
In a nutshell, when you know what the distribution is and its parameters, you can construct the theoretical distribution. So, in the case of R, the dnorm command returns the standard normal distribution. That is the distribution whose probability density function is: $$ f(x|\mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$ and where we know $\mu = 0$ and $\sigma = 1$, so we actually have $$ f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}} $$ and $$ P(X \leq x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}}\, e^{-\frac{t^2}{2}}\; dt $$ That's because we start knowing everything. With the EMPIRICAL distribution we start knowing nothing. What we have is a collection of observations, and we want to try and derive some knowledge from that collection. Perhaps we will fit a distribution; perhaps, if we have enough observations, we'll just measure from those. For example, if I have the following 10 numbers, I can create an empirical distribution: $\{1, 2, 3, 4, 4, 5, 8, 9, 9, 10\}$ Looking at just these numbers, the empirical probability of choosing a 5 or less is 60%, since I have 6 out of 10 observations of 5 or less. What density does is run through the collection of observations and fit a kernel-smoothed density to them. It isn't normal, binomial, Poisson, Pareto, or anything in particular necessarily. It is a (sometimes) smoothed version of a histogram which can be treated like a density for calculations relating to the observations. We can try and fit theoretical distributions which are "close" in some way to the empirical one. These fitted theoretical distributions can then be used as a proxy, and we can use their properties for further fun and games.
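The 60% figure is a value of the empirical CDF; a tiny Python version of that calculation, using the example numbers above:

```python
obs = [1, 2, 3, 4, 4, 5, 8, 9, 9, 10]

def ecdf(data, x):
    # Empirical P(X <= x): fraction of observations at or below x
    return sum(v <= x for v in data) / len(data)

p5 = ecdf(obs, 5)   # 0.6, the 60% from the example
```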
34,376
What is the difference between the theoretical distribution and the empirical distribution?
Simply put, an empirical distribution changes with the sample it is computed from, whereas a theoretical distribution doesn't change with a sample drawn from it. Or, put another way, an empirical distribution is determined by the sample, whereas a theoretical distribution determines the samples that come out of it.
34,377
What is the difference between the theoretical distribution and the empirical distribution?
Empirical probability of an event is an "estimate" that the event will happen, based on how often the event occurs after collecting data or running an experiment (over a large number of trials). It is based specifically on direct observations or experiences. Theoretical probability of an event is the number of ways the event can occur divided by the total number of outcomes; it applies when the events come from a sample space of known, equally likely outcomes. http://www.regentsprep.org/regents/math/algebra/apr5/theoprop.htm
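The two notions line up in the long run, which a quick simulation shows (the die example is mine, added for illustration):

```python
import random

# Theoretical probability: favorable outcomes / total outcomes.
outcomes = [1, 2, 3, 4, 5, 6]
theoretical = sum(1 for o in outcomes if o % 2 == 0) / len(outcomes)

# Empirical probability: relative frequency over many simulated rolls.
random.seed(0)
rolls = [random.choice(outcomes) for _ in range(100_000)]
empirical = sum(1 for r in rolls if r % 2 == 0) / len(rolls)

print(theoretical)          # 0.5 by counting
print(round(empirical, 2))  # close to 0.5 after many trials
```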
34,378
Can a random variable be a deterministic function of other random variables yet be independent of them?
A non-trivial univariate real random variable that is a deterministic function of another random variable is not independent of it (see Mark L. Stone's answer for an example with a constant random variable). However, when more than two random variables are involved, independence shows counterintuitive behaviours. I'll give an example of $Z$, a deterministic function of $X$ and $Y$, that is independent of both $X$ and $Y$.

Let $X$ and $Y$ be independent Bernoulli variables with $p=0.5$ (for example, $X$ and $Y$ are the results of tossing a coin each). Let $f(X,Y)$ equal $1$ if $X=Y$ and $0$ if $X$ differs from $Y$, and let $Z=f(X,Y)$.

You can easily see that $P(X=0)=0.5=P(X=0\mid Z=1)=P(X=0\mid Z=0)$ and that $P(X=1)=0.5=P(X=1\mid Z=1)=P(X=1\mid Z=0)$, proving that $X$ and $Z$ are independent. Or, using another definition of independence: \begin{align*}P(X=0\text{ and }Z=0) &= P(X=0\text{ and }X \text{ different from }Y) \\ &= P(X=0\text{ and }Y=1) \\ &= 0.5 \cdot 0.5 = 0.25 \\\\ P(X=0) \cdot P(Z=0) &= 0.5 \cdot 0.5 = 0.25 \end{align*} The same computation can be done for all values of $X$ and $Z$, proving that $P(X=a\text{ and }Z=b) = P(X=a)\cdot P(Z=b)$ for every value of $a$ and $b$. Furthermore, the same proof holds for $Y$, so $Y$ and $Z$ are independent as well.

In fact, $X$, $Y$ and $Z$ are pairwise independent, while $X$, $Y$ and $Z$ are not independent considered as a whole (not jointly independent). Interestingly, $Z$ is not independent of the pair $(X, Y)$.
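The pairwise-but-not-joint independence can be checked by brute force over the four equally likely $(X, Y)$ outcomes. A small verification sketch (variable and function names are mine):

```python
from itertools import product

# All four (X, Y) outcomes are equally likely with probability 1/4.
joint = {(x, y): 0.25 for x, y in product([0, 1], repeat=2)}

def z(x, y):
    """Z = f(X, Y): 1 if X == Y, else 0."""
    return 1 if x == y else 0

def p(event):
    """Probability of an event given as a predicate on (x, y)."""
    return sum(pr for (x, y), pr in joint.items() if event(x, y))

# Pairwise independence: P(X=a, Z=b) = P(X=a) P(Z=b) for all a, b.
pairwise = all(
    abs(p(lambda x, y: x == a and z(x, y) == b)
        - p(lambda x, y: x == a) * p(lambda x, y: z(x, y) == b)) < 1e-12
    for a in (0, 1) for b in (0, 1)
)

# But Z is determined by (X, Y): P(X=0, Y=0, Z=1) != P(X=0, Y=0) P(Z=1).
joint_indep = abs(
    p(lambda x, y: x == 0 and y == 0 and z(x, y) == 1)
    - p(lambda x, y: x == 0 and y == 0) * p(lambda x, y: z(x, y) == 1)
) < 1e-12

print(pairwise, joint_indep)  # True False
```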
34,379
Can a random variable be a deterministic function of other random variables yet be independent of them?
A random variable which is a constant with probability 1 is independent of itself. I leave the trivial proof to you as an exercise. Consider the deterministic function $f(x) = x$, applied to the random variable $Y$, which equals a constant, say $e^\pi$, with probability one. Then the random variable $f(Y)$ is independent of $Y$. If you don't like that example because $f(Y)$ is the same as $Y$, then use the function $f(x) = 2x$; the same conclusion holds. This provides a counterexample to the incorrect statement "An univariate real random variable that is a deterministic function of another random variable is not independent of it" in the first paragraph of the answer by @Pere.
34,380
Where to find a guide to encoding categorical features?
Binary variables No encoding is needed: use them as is.

Nominal data When you have a variable that can take on a finite number of values, that's called a categorical variable. When the values can't be ordered (e.g., red, blue, green), that's called a nominal variable. A nominal variable is one kind of categorical variable. For nominal variables, the usual way to encode them is with a one-hot encoding. If there are $N$ possible values for the variable, you map each value to an $N$-vector that has a $1$ in the position corresponding to that value and $0$ elsewhere. For instance: red $\mapsto (1,0,0)$, blue $\mapsto (0,1,0)$, green $\mapsto (0,0,1)$.

Ordinal data When you have a categorical variable where the values can be ordered (sorted), but the ordering doesn't imply anything about how much they differ, that's called an ordinal variable (see ordinal data). For example, suppose you have a ranking: John finished in 3rd place, Jane in 6th place. You know that John finished before Jane, but that doesn't necessarily mean that John was $6/3=2$ times as fast as Jane. You can encode ordinal data using the thermometer trick. If there are $N$ possible values for the variable, then you map each value to an $N$-vector, where you put a $1$ in the position that matches the value of the variable and in all subsequent positions. For instance: first place $\mapsto (1,1,1)$, second place $\mapsto (0,1,1)$, third place $\mapsto (0,0,1)$. You can also apply binning if $N$ is too large, but usually it's better not to do that.

Numerical variables Finally, you may encounter variables that directly measure a number, where the values can be not only ordered but also subtracted or divided. Then it's typically best to use the number directly, or possibly its logarithm. (You might take the logarithm if the number represents a ratio, or if there is a very wide range of values.)

Useful background To understand these terms, it's helpful to learn about "levels of measurement": https://en.wikipedia.org/wiki/Level_of_measurement.

Scaling Finally, when you're using neural networks or "deep learning", you'll normally want to standardize/rescale all numerical attributes before applying deep learning. I suggest you treat that as a separate process from the feature mappings mentioned above, to be performed after you apply the feature mapping.
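Both encodings are only a few lines of code. A sketch in plain Python (the function names are mine; libraries such as scikit-learn provide the one-hot case ready-made):

```python
def one_hot(value, categories):
    """Nominal encoding: 1 at the value's position, 0 elsewhere."""
    return [1 if c == value else 0 for c in categories]

def thermometer(value, ordered_levels):
    """Ordinal 'thermometer' encoding: 1 at the value's rank and at
    all subsequent positions."""
    rank = ordered_levels.index(value)
    return [1 if i >= rank else 0 for i in range(len(ordered_levels))]

print(one_hot("blue", ["red", "blue", "green"]))            # [0, 1, 0]
print(thermometer("second", ["first", "second", "third"]))  # [0, 1, 1]
```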
34,381
How do I quantify the uniformity of sampling time?
There are many metrics. They are best used in conjunction with visualizing the data appropriately. Among the solutions worth considering is to compare the distribution of the frequencies (regardless of time) to your reference distribution, the uniform one. Theory suggests that the deviations from perfect uniformity--the residuals--should be about the size of the square root of the average frequency. You can exploit that to compare datasets with different absolute frequencies: standardize the residuals by dividing them by their expected deviations.

This has a close mathematical relationship to chi-squared tests. Indeed, we can use the standard Normal distribution as a reference for the standardized residuals, whence the sum of their squares is the usual chi-squared statistic. When it's small--around the number of distinct times or less--you have near-perfect uniformity. That gives you a good reference value for comparison.

Let's look at your data from this point of view. Here are versions of your three datasets. We can order the residuals and plot them against the expected values of the first, second, ..., twenty-fourth order statistics of the standard Normal distribution. The horizontal deviations of these plots around a diagonal line signal non-uniformity.

Notice the chi-squared statistics posted in each plot. The value of $15.8$ at the left isn't even as great as $24$ (the number of data values), perfectly consistent with a uniform distribution. The middle value of $563$ is large: although the residuals line up in the plot, their values are too spread out--this is an over-dispersed dataset. Finally, the right-hand value of $28000$ is huge; it signals major variations in this dataset.

Even more insight can be had by redrawing these plots, each on its own axis, so we can see the details of the variation. Now you can see clearly how uniformly dispersed the first two datasets are. But by inspecting their vertical scales, you can see that the "dispersed" data are spread out around seven times more than the "uniform" data: that measures the over-dispersion.

Just about all statistical software produces plots like these: they are called "QQ" (quantile-quantile) plots. This method works well for any dataset. Interpreting the chi-squared statistic becomes a little delicate when the average frequency drops below $5$ or so, but for almost any exploratory application that's no problem.
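The standardization and the chi-squared statistic take only a few lines to compute. A sketch (the hourly counts below are made up for illustration; near-uniform data should give a statistic on the order of the number of bins, here 24, or less):

```python
import math

def uniformity_chi2(counts):
    """Standardized residuals against a uniform expectation, and the
    chi-squared statistic (sum of squared standardized residuals)."""
    expected = sum(counts) / len(counts)
    residuals = [(c - expected) / math.sqrt(expected) for c in counts]
    chi2 = sum(r * r for r in residuals)
    return residuals, chi2

# 24 hourly counts for a near-uniform series (2400 events in total).
counts = [100, 103, 97, 99, 101, 98, 102, 100, 96, 104, 100, 99,
          101, 97, 103, 100, 98, 102, 99, 101, 100, 96, 104, 100]
_, chi2 = uniformity_chi2(counts)
print(round(chi2, 2))  # 1.22 -- far below 24, consistent with uniformity
```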
34,382
How do I quantify the uniformity of sampling time?
The uniform distribution has the highest entropy, so entropy can be used as a measure of uniformity: $$S=-\sum_{i=1}^n p(x_i)\log p(x_i)$$ The minimum is $0$ (all mass on one value); the maximum is $\log(n)$, attained by the uniform distribution. The exponential version is more intuitive: it can be read as the effective fraction of values covered: $$p=e^S/n$$ Examples:
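A sketch of the computation (the frequency vectors below are illustrative):

```python
import math

def uniformity_entropy(counts):
    """Shannon entropy S of the frequency distribution, and exp(S)/n,
    the 'effective fraction of values covered'."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]  # 0*log(0) treated as 0
    s = -sum(p * math.log(p) for p in probs)
    return s, math.exp(s) / len(counts)

print(uniformity_entropy([10, 10, 10, 10]))  # uniform: S = log(4), p = 1.0
print(uniformity_entropy([40, 0, 0, 0]))     # one value: S = 0, p = 0.25
```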
34,383
How do I quantify the uniformity of sampling time?
You could construct a Monte Carlo-style test with the given parameters and measure how similar they are. That is, generate a large number of uniformly distributed data points and measure how much they overlap with your data. Since your data are already frequencies, you can just count how many points your simulation puts in each time period versus how many your data contain. For example, generate 240,000 points (since the total sum of the frequencies is roughly 240,000) from a uniform distribution on $(0, 2400)$ and count how many points fall in each interval of width 100. Then $$\dfrac{\sum_{i=1}^{24} \left|\text{frequency}_{i\text{th hour}} - \text{number of simulated observations}_{i\text{th interval}}\right|}{\text{total sum of frequencies}}$$ gives you the fraction of data points that behave differently from what you would expect. You could (applying the CLT) test this against a normal distribution, though you then have to rescale by the total sum of frequencies.
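A sketch of this procedure, summing absolute differences between observed and simulated bin counts (my reading of the formula; the example frequency vectors are made up):

```python
import random

def mc_uniform_discrepancy(freqs, seed=0):
    """Simulate the same total number of events uniformly over the
    window, bin them per hour, and return the fraction of events
    that deviate from the uniform simulation."""
    random.seed(seed)
    total, n_bins = sum(freqs), len(freqs)
    sim_counts = [0] * n_bins
    for _ in range(total):
        # random.random() is in [0, 1), so the bin index stays valid.
        sim_counts[int(random.random() * n_bins)] += 1
    return sum(abs(f - s) for f, s in zip(freqs, sim_counts)) / total

uniform_like = [100] * 24        # 2400 events spread evenly
spiked = [10] * 23 + [2170]      # same total, one large spike
d_u = mc_uniform_discrepancy(uniform_like)
d_s = mc_uniform_discrepancy(spiked)
print(round(d_u, 3), round(d_s, 3))  # the spike gives a much larger discrepancy
```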
34,384
How do I quantify the uniformity of sampling time?
I am not sure what you are after, or whether you need some test as well (chi-squared might do it). Anyway, to explore this kind of data I would plot a line graph of multiple frequency distributions as a function of time (for different periods) -- so 'frequency' instead of 'absolute numbers', such that the variation in the amount of events per period does not have an effect.

Numeric example: say you start from some table of period x time (I use dummy data, but you could make such a thing with your data):

rand_table <- matrix(qnorm(runif(24 * 20, 0, 1), 200, 5), 24)
day_effect <- sin(c(1:24) / 12 * 3.14) * 10
year_effect <- runif(20, -100, 100)
table <- rand_table + matrix(rep(day_effect, 20), 24, byrow = FALSE) +
  matrix(rep(year_effect, 24), 24, byrow = TRUE)
colnames(table) <- 1997:2016
table

     1997     1998     1999     2000     2001     ...
[1,] 260.0667 233.5953 120.2019 173.8588 256.3401 ...
[2,] 249.5244 239.8960 120.5107 192.3158 253.0286 ...
[3,] 247.9368 241.0808 122.3087 187.8790 254.8316 ...
...

and then plot the frequency for each year separately:

norm_table <- t(t(table) / colSums(table))

# plot
plot(-100, -100, xlim = c(0, 24), ylim = c(0, 0.1),
     xlab = "hour", ylab = "frequency")
plotcolor <- hsv(seq(0.1, 0.8, length.out = 20), 1, 1)
for (i_year in 1:20) {
  lines(x = c(1:24), y = norm_table[, i_year], col = plotcolor[i_year])
}

If a reduction to categorical data is fine for your purpose and you wish to have a quick test, then you can do a chi-squared test on the cross table. To see which values are the strongest outliers you could plot their deviations from the predicted values.
How do I quantify the uniformity of sampling time?
I am not sure what you are after. Whether you need some test as well (ChiSquare might do it)? Anyway. To explore this kind of data I would plot a line graph of multiple frequency distributions as a fu
How do I quantify the uniformity of sampling time?

I am not sure what you are after. Perhaps you need some test as well (a chi-squared test might do it)? Anyway. To explore this kind of data I would plot a line graph of multiple frequency distributions as a function of time (for different periods). So 'frequency' instead of 'absolute numbers', such that the variation in the number of events per period has no effect. Numeric example: say you start from some table of period x time (I use dummy data, but you could build such a table from your data):

rand_table <- matrix(qnorm(runif(24*20,0,1),200,5),24)
day_effect <- sin(c(1:24)/12*3.14)*10
year_effect <- runif(20,-100,100)
table <- rand_table + matrix(rep(day_effect,20),24,byrow=0) + matrix(rep(year_effect,24),24,byrow=1)
colnames(table) <- (1997:2016)
table

          1997     1998     1999     2000     2001     2002     2003     2004     2005     2006     2007     2008     2009     2010     2011     2012     2013     2014     2015     2016
 [1,] 260.0667 233.5953 120.2019 173.8588 256.3401 204.9747 177.7684 151.1975 138.7639 177.9552 240.7447 205.3817 265.8322 283.5408 262.5569 270.4157 174.4056 128.5618 290.2204 211.4510
 [2,] 249.5244 239.8960 120.5107 192.3158 253.0286 213.0315 169.0203 151.1792 148.4686 189.3689 243.1242 208.7573 262.9110 284.1739 258.1356 277.2521 169.0257 126.3684 294.5213 215.3311
 [3,] 247.9368 241.0808 122.3087 187.8790 254.8316 221.1269 171.4014 153.3314 141.9503 190.8465 245.8284 215.3114 267.7581 293.8443 261.3726 280.1178 177.0729 132.3668 297.9602 204.8105
 [4,] 259.6248 236.9574 131.4456 181.1347 254.8909 215.5816 167.8606 162.4480 143.0204 197.6665 248.4771 207.5274 261.8768 288.2050 262.8231 279.2299 173.8911 128.8646 304.1465 211.6245
 [5,] 258.5498 235.3907 124.2587 199.1903 256.9186 201.3183 175.2693 158.9103 150.4161 199.6419 245.8176 201.3408 267.2395 296.9404 271.8963 281.5610 170.9048 135.5900 307.9319 225.0224
 [6,] 254.2567 238.7180 125.5673 206.0158 258.8102 218.9670 173.4169 156.2924 145.0003 194.5099 252.9973 217.6832 264.4040 290.2716 267.9381 276.0918 166.2304 133.1175 307.3783 223.6769
 [7,] 263.8015 240.6280 128.1640 189.2185 260.7813 213.8075 179.2625 153.9522 150.5207 192.3161 245.8665 216.3049 275.7316 299.2184 271.1639 273.7122 179.7725 132.4717 304.0137 216.3689
 [8,] 251.1121 229.2925 125.1061 199.1912 251.5387 213.5779 173.9617 161.8092 143.6716 191.9641 245.3583 219.0199 266.3403 297.8517 271.8039 277.2551 184.1632 138.4491 303.5132 212.9774
 [9,] 262.7223 228.7230 123.4121 188.1462 256.0832 214.9447 168.3505 160.6458 141.1269 189.5807 252.1686 211.2347 258.2162 286.4550 257.3694 277.0731 164.2448 142.6826 297.7478 214.7545
[10,] 249.4369 237.1748 121.9888 187.5911 247.8664 205.6249 164.0456 158.7430 146.7741 194.0744 234.6995 208.7744 267.5273 277.2283 269.1293 276.5450 172.7910 141.0752 290.0011 206.9505
[11,] 251.5613 232.9232 122.3697 181.4821 239.0499 212.5056 166.7117 148.2396 146.7692 184.4266 242.7564 213.7748 259.5819 274.8612 259.7164 279.9896 167.4115 128.5881 295.3626 212.8600
[12,] 254.9230 220.9191 114.5943 171.9050 242.3551 209.4729 168.1985 131.6664 131.7506 182.0200 243.0256 204.5998 256.6656 282.8597 265.3859 272.2841 161.0621 122.4714 301.9434 207.6163
[13,] 244.3254 220.8979 109.9595 174.6536 247.1941 207.7537 165.4744 145.6344 133.9057 185.2712 242.3778 198.1467 255.9428 271.8196 257.6499 269.8779 156.8241 124.0382 294.1261 207.0789
[14,] 244.6379 222.5761 111.4693 174.1807 247.1866 194.8534 168.4106 143.9778 137.0152 179.4115 233.5407 201.8264 260.9790 279.9378 249.5447 267.6091 152.3627 117.3667 285.6467 202.8995
[15,] 241.6316 224.1021 113.3249 164.2898 242.0647 203.1320 152.5925 139.5406 143.6150 182.4157 229.9307 207.7646 253.3011 273.1239 254.5200 267.1596 152.2835 113.4475 290.8282 193.2609
[16,] 236.1205 217.1519 103.7728 177.2118 235.3146 196.6978 165.7939 138.1489 135.7740 179.4173 240.3620 196.5932 251.1842 276.0529 242.9755 259.4906 157.0868 113.2675 277.5009 198.8784
[17,] 244.0012 224.7148 106.9760 172.4818 237.0167 197.9934 152.9706 140.7621 118.6633 176.8517 227.0027 191.5034 248.9712 265.6904 242.9542 271.0733 150.5976 112.6630 284.8537 193.8020
[18,] 241.4985 213.2320 111.5817 170.3487 237.5654 188.5293 155.9515 143.3172 121.2416 176.1494 231.1264 198.5020 253.3602 269.5122 251.1248 269.7529 149.3875 114.2572 271.4105 202.0268
[19,] 235.2160 221.5018 108.2831 170.1034 238.3747 200.2492 156.0825 135.2035 124.5123 171.6472 233.6136 204.2768 247.2021 278.7483 247.4591 263.8084 157.1870 121.6157 284.1122 202.8897
[20,] 251.9713 226.7143 102.0953 170.3932 236.9628 199.0248 172.7265 137.3076 134.7663 171.9151 238.1566 188.2712 252.4209 275.7938 262.8806 259.6602 148.8275 114.1513 284.5481 204.1796
[21,] 245.5455 223.6492 109.9065 177.2015 241.7890 193.4855 156.2468 138.8739 138.1672 179.0033 233.4918 197.4053 247.6878 274.7670 251.1480 260.8251 148.7647 118.0140 284.0358 201.4156
[22,] 245.4394 217.7951 115.2334 178.6744 249.4580 204.2626 152.8850 137.1021 130.8307 182.5549 232.3583 200.7744 256.4592 277.6917 248.8487 262.9982 153.6351 122.6311 287.4040 195.8459
[23,] 252.1241 225.9910 113.6773 186.8966 243.3909 208.2465 161.2245 143.8986 133.0586 186.5644 224.3530 205.1357 263.8198 283.8469 255.0983 265.6281 161.3566 128.1783 294.0537 199.1899
[24,] 238.2365 229.5241 117.0020 187.2129 250.2044 208.5918 174.3084 142.9330 133.0209 187.7178 242.7608 209.7624 259.7822 277.7875 256.6742 276.9512 157.4175 125.0456 298.1174 210.7647

and then plot the frequency for each year separately:

norm_table <- t(t(table)/colSums(table))
# plot
plot(-100,-100,xlim=c(0,24),ylim=c(0,0.1),xlab="hour",ylab="frequency")
plotcolor <- hsv(seq(0.1,0.8,length.out=20),1,1)
for (i_year in 1:20) {
  lines(x=c(1:24),y=norm_table[,i_year],col=plotcolor[i_year])
}

If a reduction to categorical data is fine for your purpose and you'd wish to have a quick test, then you can do a chi-squared test on the cross table. To see which values are the strongest outliers you could plot their deviation from the predicted values.
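If you take the chi-squared route, the test statistic on the cross table is easy to compute by hand. A minimal Python sketch with made-up counts (the numbers and the binning into six blocks are purely illustrative assumptions):

```python
# Hypothetical example: Pearson chi-squared statistic for uniformity of
# sampling over time, computed by hand on made-up counts.

def chi_squared_stat(observed, expected):
    """Pearson chi-squared statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# counts of events per (say) 4-hour block over one period
observed = [52, 48, 61, 39, 55, 45]
total = sum(observed)
expected = [total / len(observed)] * len(observed)  # uniform sampling in time

stat = chi_squared_stat(observed, expected)
# compare stat against the chi-squared distribution with len(observed)-1 = 5 df
print(stat)  # about 6.0 for these counts
```

With 5 degrees of freedom, a statistic of about 6 is far below the usual critical value (roughly 11.1 at the 5% level), so these made-up counts would not reject uniformity.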
34,385
Does likelihood ratio test control for overfitting?
Your reasoning is too pessimistic. Given the $K$ additional features, the LR test statistic will follow an asymptotic $\chi^2$ distribution with $K$ degrees of freedom if the null is true (and other auxiliary assumptions, e.g., a suitable regression setting, weak dependence assumptions etc.), i.e., if the additional predictors in $B$ are just noise features that lead to "overfitting". The figure below plots the 0.95-quantiles of the $\chi^2_K$ distribution as a function of $K$, i.e. the value that the LR statistic needs to exceed to reject the null that $A$ is the "good" model. As you can see, higher and higher values of the test statistic are needed the larger the set in $B$ that "overfits" the data. So the test suitably makes it more difficult for the (inevitable) better fit (or log-likelihood) of the larger model to be judged "sufficiently" large to reject model $A$. Of course, for any given application of the test, you might get spurious overfitting that is so "good" that you still falsely reject the null. This "type-I" error is however inherent in any statistical test, and will occur in about 5% of the cases in which the null is true if (like in the figure) we use the 0.95-quantiles of the test's null distribution as our critical values.
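The growth of those critical values with $K$ can be checked with a quick Monte Carlo sketch (a NumPy simulation; the draw count is an arbitrary choice):

```python
import numpy as np

# Monte Carlo sketch of the point above: the 0.95-quantile of the chi-squared
# null distribution grows with K, so a larger "overfitting" set B needs a
# larger LR statistic before model A is rejected.
rng = np.random.default_rng(0)

def chi2_q95(k, n_draws=200_000):
    # empirical 95th percentile of a chi-squared(k) sample
    return np.percentile(rng.chisquare(k, size=n_draws), 95)

quantiles = [chi2_q95(k) for k in (1, 5, 10, 20)]
print(quantiles)  # increasing in K; the K=1 value is close to 3.84
```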
34,386
Does likelihood ratio test control for overfitting?
How can we determine whether the added features cause the overfitting problem? It depends on what you intend to use your model for. If you indeed are interested in testing a hypothesis that the $K$ extra features have zero coefficients in your model, then the likelihood ratio (LR) test is the relevant tool to use. It is not flawed in this respect, as shown by @ChristophHanck. If you intend to use your model for prediction, you care whether the extra features improve predictive performance. For that it is not sufficient that the features truly belong in the nesting model; their contribution also needs to be estimated with sufficient accuracy. (If they are estimated with poor accuracy, including them in the model may harm rather than help in prediction.) AIC is the relevant measure in this setting, while the LR test is not particularly well suited for it. How can we determine whether the added features cause the overfitting problem? Does likelihood ratio test always return the correct answer? As per @ChristophHanck's answer, there is always a possibility for committing a type I error. But you can control the error rate by setting the significance level sufficiently low, e.g. at 5% or 1%.
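To make the AIC-for-prediction point concrete, here is a small simulation sketch for Gaussian linear models (made-up data; AIC here uses the $2k - 2\log L$ convention, counting the error variance as a parameter):

```python
import numpy as np

# Sketch: comparing nested linear models by AIC. The point is only how
# AIC = 2k - 2*logL trades fit against parameter count.
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
y = 3.0 * x1 + rng.normal(size=n)      # x1 truly matters here

def gaussian_aic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)    # MLE of the error variance
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    k = X.shape[1] + 1                 # coefficients + error variance
    return 2 * k - 2 * loglik

X_reduced = np.column_stack([np.ones(n)])        # intercept only
X_full = np.column_stack([np.ones(n), x1])       # intercept + x1
a_full = gaussian_aic(X_full, y)
a_red = gaussian_aic(X_reduced, y)
print(a_full, a_red)  # the full model wins: x1 carries real signal
```

With genuine noise features instead of x1, the penalty term would usually (though not always) push AIC toward the reduced model.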
34,387
Does likelihood ratio test control for overfitting?
I think you are looking for one of the information criteria such as AIC or BIC which penalise you for adding parameters. https://en.wikipedia.org/wiki/Akaike_information_criterion has some discussion of both of them. Note that you should only compare them using the same software as they are only defined up to an additive constant so you cannot compare one computed with R with one computed with Stata.
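The criteria themselves are one-liners once you have the maximized log-likelihood; a sketch using one common convention (conventions differ across software, as noted above, so these numbers are only comparable with each other):

```python
import math

# The two criteria differ only in the per-parameter penalty:
# AIC = 2k - 2*logL,  BIC = k*ln(n) - 2*logL.
def aic(loglik, k):
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

# illustrative numbers
print(aic(-100.0, 5))        # 210.0
print(bic(-100.0, 5, 100))   # 5*ln(100) + 200, about 223.03
# BIC penalizes extra parameters more than AIC once ln(n) > 2, i.e. n > ~7.4
```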
34,388
Random Forest Regression and trended time-series
RFs, of course, can identify and model a long-term trend in the data. However, the issue becomes more complicated when you are trying to forecast out to never-before-seen values, as you often are trying to do with time-series data. For example, if you see that activity increases linearly over a period between 1915 and 2015, you would expect it to continue to do so in the future. An RF, however, would not make that forecast. It would forecast all future years to have the same activity as 2015.

from sklearn import ensemble
import numpy as np

years = np.arange(1916, 2016)  # the final year in the training data set is 2015
years = [[x] for x in years]
print('Final year is %s' % years[-1][0])

# say your ts goes up by 1 each year - a perfect linear trend
ts = np.arange(1, 101)
est = ensemble.RandomForestClassifier().fit(years, ts)
print(est.predict([[2013], [2014], [2015], [2016], [2017], [2018]]))

The above script will print (approximately) 98, 99, 100, 100, 100, 100: the value learned for 2015 is simply repeated for 2016, 2017 and 2018. Adding lag variables into the RF does not help in this regard. So be careful; I'm not sure that adding trend data to your RF will do what you think it will.
34,389
Random Forest Regression and trended time-series
Just change the variable you are trying to predict to the difference in the dependent variable. As the other posts point out, the random forest will not know how to treat time variables that occur after the training set. Let's say your training set has data from Minute 1 to Minute 60. The random forest might make a rule that after forty minutes the dependent variable is 100. Even if there is a trend, if you get out to Minute 10000 in the test data, the same rule will be applied. If you predict the difference, though, this can have the same effect as including a trend. As to whether RFs are decent forecasters, I have had MUCH greater luck with RFs than with other econometric models like VAR, VECM, etc., especially for short-term forecasts. Some other models do seem to work better on most data, however, such as well-tuned GBM models.
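A toy sketch of this point, using a deliberately simple stand-in for a tree model (a nearest-training-point predictor, which shares a deep tree's inability to extrapolate; all names and numbers here are made up):

```python
import numpy as np

# Like a deep tree, this predictor just returns the training target of the
# nearest training input, so it cannot produce values outside the training range.
def nearest_predict(x_train, y_train, x_new):
    return np.array([y_train[np.argmin(np.abs(x_train - x))] for x in x_new])

t_train = np.arange(1, 61)          # minutes 1..60
y_train = 10.0 * t_train            # perfect linear trend
t_future = np.arange(61, 66)

# predicting the level directly: stuck at the last training value
level_fc = nearest_predict(t_train, y_train, t_future)

# predicting the first difference instead, then cumulating
d_train = np.diff(y_train)          # constant 10
diff_fc = nearest_predict(t_train[1:], d_train, t_future)
trend_fc = y_train[-1] + np.cumsum(diff_fc)

print(level_fc)   # [600. 600. 600. 600. 600.]
print(trend_fc)   # [610. 620. 630. 640. 650.]
```

The level forecast flatlines at the last observed value, while the differenced forecast continues the trend, which is exactly the effect described above.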
34,390
Why is the assumption of a normally distributed residual relevant to a linear regression model? [duplicate]
The usual small-sample inference (confidence intervals, prediction intervals, hypothesis tests) relies on normality. You can of course make different parametric assumptions. While Gauss-Markov gives you BLUE, the problem is that if you're far enough from normality, all linear estimators may be bad, so choosing the best among them may be nearly useless.
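For instance, a quick Monte Carlo check that nominal 95% t-intervals for a mean achieve roughly 95% coverage when the errors really are normal (the critical value 2.093 is $t_{0.975,19}$, hard-coded here for $n = 20$):

```python
import numpy as np

# Simulate many small normal samples and count how often the nominal 95%
# t-interval for the mean covers the true mean (0).
rng = np.random.default_rng(4)
n, t_crit = 20, 2.093                 # t quantile for 19 df
reps = 5000
hits = 0
for _ in range(reps):
    x = rng.normal(size=n)
    half = t_crit * x.std(ddof=1) / np.sqrt(n)
    hits += (abs(x.mean()) <= half)
coverage = hits / reps
print(coverage)  # close to 0.95
```

With markedly non-normal errors and small n, the same construction can over- or under-cover, which is the point of the answer above.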
34,391
Why is the assumption of a normally distributed residual relevant to a linear regression model? [duplicate]
You're correct that the assumption of normality is not required to prove unbiasedness. Without the assumption of normality you can also prove efficiency in the class of linear, unbiased estimators via the Gauss-Markov theorem. If the errors are normally distributed, you can also establish that the least-squares estimators coincide with the maximum likelihood estimators. This lets you talk about things like the asymptotic efficiency of the MLEs in terms of the Cramér-Rao lower bound. From this you can establish that the OLS estimators are asymptotically best in the class of regular estimators - estimators whose distributions "are not affected by small changes in the parameter", according to Larry Wasserman. So, the normality assumption is not required but nets you some stronger results.
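A one-line sketch of why least squares coincides with maximum likelihood under normal errors (the standard derivation, not specific to this thread):

```latex
\ell(\beta,\sigma^2)
  = -\frac{n}{2}\log(2\pi\sigma^2)
    - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\bigl(y_i - x_i^\top\beta\bigr)^2
```

For any fixed $\sigma^2$, maximizing $\ell$ over $\beta$ is exactly minimizing $\sum_{i}(y_i - x_i^\top\beta)^2$, i.e. the least-squares criterion.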
34,392
Reviewer questioning my stats, need a second opinion (multiple linear regression)
If you think there is a discontinuity in the effect of the exposure at an exposure (toxin level) of zero, you can test a more general hypothesis using at least 2 predictors: an indicator of toxin > 0 and something like log(toxin + 1). The 2 d.f. "chunk" test for the combined effects of these two predictors tests the null hypothesis that toxin level is associated with the outcome, allowing for a discontinuity at zero. You can get the chunk test using a general contrast with 2 d.f. or by omitting both variables and doing the "difference in $R^2$" test. The reviewer is incorrect. It is very important to make sure that you have chosen the right model for the clinical outcome score. You are assuming the score is a continuous variable without a great number of ties, and that the residuals from the model have a Gaussian distribution. Avoid any removal of variables on the basis of $P$-values.
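A numeric sketch of the 2 d.f. chunk test via the difference-in-RSS F statistic, on simulated data built to contain such a discontinuity (data, effect sizes, and helper names are all made up for illustration):

```python
import numpy as np

# Full model adds I(toxin > 0) and log(toxin + 1) to an intercept-only
# reduced model; the F statistic compares the two residual sums of squares.
rng = np.random.default_rng(2)
n = 150
toxin = np.where(rng.random(n) < 0.5, 0.0, rng.exponential(2.0, n))
y = 5.0 * (toxin > 0) + 2.0 * np.log(toxin + 1) + rng.normal(size=n)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

X_red = np.column_stack([np.ones(n)])
X_full = np.column_stack([np.ones(n), (toxin > 0).astype(float), np.log(toxin + 1)])
df_num = X_full.shape[1] - X_red.shape[1]      # the 2-df "chunk"
df_den = n - X_full.shape[1]
F = ((rss(X_red, y) - rss(X_full, y)) / df_num) / (rss(X_full, y) / df_den)
print(F)  # large here, because the simulated effect is strong
```

The p-value would come from the F distribution with (2, n - 3) degrees of freedom; in software this is usually obtained from a nested-model comparison rather than by hand.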
34,393
Reviewer questioning my stats, need a second opinion (multiple linear regression)
It would be important to (1) see the regression plots of the relation between toxin and clinical score and (2) know in more detail what your experimental treatment consisted of. I created an oversimplified data example in R to illustrate the problem. Data example:

data1 <- data.frame(tox=c(0,0,0,0,0,0,0,0,0,1,1,1,2,2,2,3,3,3),
                    clin=c(10,10,10,10,10,10,10,10,10,20,30,40,20,30,40,20,30,40))
model1 <- lm(data1$clin ~ data1$tox)
data2 <- data.frame(tox=c(1,1,1,2,2,2,3,3,3),
                    clin=c(20,30,40,20,30,40,20,30,40))
model2 <- lm(data2$clin ~ data2$tox)
par(mfrow=c(1,2))
plot(data1$clin ~ data1$tox, xlim=c(0,4), ylim=c(0,40), xlab="toxin", ylab="clinical score")
abline(model1, col="blue")
plot(data2$clin ~ data2$tox, xlim=c(0,4), ylim=c(0,40), xlab="toxin", ylab="clinical score")
abline(model2, col="yellow")

Model1 would show a significant regression model. However, the effect may entirely disappear if we removed the clinical scores at tox=0, as shown in model2. It would be essential to know what the scatterplot of the data looks like when you remove tox=0. In this case it would be hard to believe there is some linear (or higher-order) relation between the dose of toxin and clinical scores. It may, however, still be worthwhile to perform a group comparison, no-toxin (tox=0) versus toxin (tox>0). From a research methodological point of view it would be important to know what tox=0 really means. What kind of treatment did patients receive at tox=0 and tox>0? If a placebo treatment was used (i.e., the only difference is that tox>0 really received a toxin and tox=0 received something fake) then a simple group comparison may still be valid to test the effect of no-toxin vs toxin.
34,394
Reviewer questioning my stats, need a second opinion (multiple linear regression)
Try to turn your toxin exposure into a categorical predictor and run the same model. If the IV is still significant, run a reduced model with only the significant predictors (toxin exposure, along with age and education), with toxin exposure as a continuous predictor, for the 80 participants who were exposed to the toxin. I think your reviewer has a valid concern about the 70 non-exposed participants. Alternatively, you can randomly sample some participants from your 70 non-exposed and try to run the same model.
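A tiny sketch of the recoding step, assuming the exposure variable is a numeric vector (all values made up):

```python
import numpy as np

# Recode exposure as a categorical indicator, and subset to the exposed
# participants for the continuous dose-response model.
toxin = np.array([0.0, 0.0, 1.2, 3.4, 0.0, 2.1, 0.7])   # made-up exposures

exposed = (toxin > 0).astype(int)   # categorical predictor: 0 = none, 1 = any
subset = toxin[toxin > 0]           # dose model runs on these participants only

print(exposed)   # [0 0 1 1 0 1 1]
print(subset)    # [1.2 3.4 2.1 0.7]
```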
34,395
Convolutional neural networks: shared weights?
The main advantage of shared weights is that you can substantially lower the degrees of freedom of your problem. Take the simplest case and think of a tied autoencoder, where the input weights are $W_{x} \in \mathbb{R}^d$ and the output weights are $W_{x}^T$. You have halved the parameters of your model, from $2d \rightarrow d$. You can see some visualizations here: link. Similar results would be obtained in a ConvNet. This way you get the following: fewer parameters to optimize, which means faster convergence to some minima, at the expense of making your model less flexible. It is interesting to note that this reduced flexibility can often work as a regularizer and help avoid overfitting, as the weights are shared with some other neurons. Therefore, it is a nice tweak to experiment with, and I would suggest you try both. I've seen cases where sharing information (sharing weights) has paved the way to better performance, and others where it made my model significantly less flexible.
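A minimal numerical sketch of the tied-autoencoder idea (dimensions and names are arbitrary): the decoder reuses the transposed encoder matrix, so the weight count halves:

```python
import numpy as np

# Tied autoencoder sketch: one weight matrix W serves both the encoder and
# (transposed) the decoder, halving the weight count from 2*d*h to d*h.
rng = np.random.default_rng(3)
d, h = 8, 4
W = rng.normal(size=(d, h))           # the ONLY weight matrix

def encode(x):
    return np.tanh(x @ W)

def decode(z):
    return z @ W.T                    # shared (tied) weights

x = rng.normal(size=(1, d))
x_hat = decode(encode(x))
print(x_hat.shape)                          # (1, 8)
print("untied:", 2 * d * h, "tied:", W.size)  # 64 vs 32 weights
```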
34,396
Convolutional neural networks: shared weights?
@iassael emphasized the regularization effect that results from reduced parameters, but I think the better performance of the weight-sharing method is more about finding local features instead of global ones. This reduces exponential possibilities to a linear scale, or at least to a scale that can be more easily managed. Here is a simple example. Let's say we have an input with only 4 pixels and each pixel has only binary values '0' or '1'. There are 2^4 = 16 possible configurations for a global feature to learn. However, if a local feature is used, say one with a receptive field of only 1 pixel, it is enough to learn 2 simple features, '0' and '1'. As the receptive field size increases, the number of features to be learnt also increases. As a result, local features reduce the number of features that need to be learnt. By convolving these local features over the whole input space, it can be found exactly where these features are present. Let's apply the same analogy to an object detection task. If a fully connected first layer attempts to extract all possible configurations of edges from the given images, it needs to learn many combinations of different edges, which is a lot. However, if it tries to learn local features, the number of possible edges is greatly reduced to different edge orientations. Then, via convolution, it can reveal which locations most strongly activate a specific edge orientation.
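The parameter-count side of this argument is easy to make concrete; a back-of-the-envelope comparison between a fully connected map and a single shared 3x3 filter between two 32x32 single-channel layers (sizes chosen arbitrarily):

```python
# Fully connected: every output pixel sees every input pixel.
# Shared 3x3 convolution: one local filter reused at every position.
H = W = 32
dense_weights = (H * W) * (H * W)   # 1024 * 1024
conv_weights = 3 * 3                # one shared kernel

print(dense_weights)  # 1048576
print(conv_weights)   # 9
```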
34,397
Convolutional neural networks: shared weights?
A typical weight sharing technique found in CNNs treats the input as a hierarchy of local regions. It imposes a general assumption (prior knowledge) that the input to be processed by the network can be decomposed into a set of local regions of the same nature, each of which can therefore be processed with the same set of transformations. With this prior, we can reduce the number of parameters in the network (compared with a fully connected network) and increase the network’s generalisation ability, provided the assumption is correct for the problem at hand.
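A minimal sketch of what "the same set of transformations for each local region" means in practice: a single shared kernel applied at every position of a 1-D input. (The two-tap "edge" kernel and the tiny signal are made-up illustrations.)

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (CNN convention): the same shared
    weights process every local region of the input."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

edge = [1, -1]               # one shared 2-tap "edge" detector
signal = [0, 0, 1, 1, 0]
print(conv1d(signal, edge))  # -> [0, -1, 0, 1]: responds only at the two edges
```

A fully connected layer mapping these 5 inputs to the same 4 outputs would need 5 * 4 = 20 free weights; the shared kernel needs only 2, regardless of input length, which is exactly the parameter reduction the prior buys.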
34,398
VAR or VECM for a mix of stationary and nonstationary variables?
So you have three nonstationary series and one stationary series. Let us call them $x_1$, $x_2$, $x_3$, and $x_4$, respectively. Suppose the nonstationarity of $x_1$, $x_2$, $x_3$ is of a unit-root kind (rather than of some other kind); that is, each of $x_1$, $x_2$, $x_3$ is integrated of order one, I(1). You can determine the order of integration using, for example, the augmented Dickey-Fuller test (ADF test).

Test each pair of the nonstationary series ($x_1$ and $x_2$; $x_1$ and $x_3$; $x_2$ and $x_3$) for cointegration using the Johansen or the Engle-Granger test. Then test all three series ($x_1$, $x_2$, $x_3$) for cointegration using the Johansen test. Depending on the results of the tests, you may find yourself in one of the following situations:

(A) No cointegration
(B) Two of the variables (say, $x_1$ and $x_2$) are cointegrated while the third variable (say, $x_3$) is not
(C) The three variables ($x_1$, $x_2$, $x_3$) are cointegrated

In general, you want the following:

- Models for cointegrated variables should have an error-correction representation; otherwise the model would be misspecified (cointegration goes hand-in-hand with the error correction representation).
- Models for stationary dependent variables should not have nonstationary explanatory variables (except perhaps for stationary combinations of cointegrated nonstationary variables); otherwise the linear combination of the regressors would diverge from the regressand.
- Models for nonstationary dependent variables should have at least one nonstationary explanatory variable; otherwise the regressand would diverge from the linear combination of the regressors.
- Mind nonstandard distributions of estimators for the integrated variables.

Based on these principles, you may do the following.

If (A), then first-difference each of the three variables ($x_1$, $x_2$, $x_3$) and use them together with the stationary variable $x_4$ to build a VAR model.

If (B), then build a model where
- $\Delta x_1$ depends on the error correction term and lags of $\Delta x_1$, $\Delta x_2$, $\Delta x_3$, $x_4$;
- $\Delta x_2$ depends on the error correction term and lags of $\Delta x_1$, $\Delta x_2$, $\Delta x_3$, $x_4$;
- $\Delta x_3$ depends on lags of $\Delta x_1$, $\Delta x_2$, $\Delta x_3$, $x_4$;
- $x_4$ depends on the error correction term and lags of $\Delta x_1$, $\Delta x_2$, $\Delta x_3$, $x_4$.

If (C), then build a model where
- $\Delta x_1$ depends on the error correction term and lags of $\Delta x_1$, $\Delta x_2$, $\Delta x_3$, $x_4$;
- $\Delta x_2$ depends on the error correction term and lags of $\Delta x_1$, $\Delta x_2$, $\Delta x_3$, $x_4$;
- $\Delta x_3$ depends on the error correction term and lags of $\Delta x_1$, $\Delta x_2$, $\Delta x_3$, $x_4$;
- $x_4$ depends on the error correction term and lags of $\Delta x_1$, $\Delta x_2$, $\Delta x_3$, $x_4$.

These are pretty general models with lots of regressors. You may find it beneficial to exclude some variables from some equations or use penalization to avoid overfitting. Relevant additional keywords: I(0), I(1).
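As a rough illustration of the Engle-Granger idea mentioned above (regress one series on the other, then check the residuals for a unit root), here is a self-contained sketch on simulated data. The simulated series, the no-intercept/no-lag Dickey-Fuller regression, and the approximately -3.34 critical value are simplifying assumptions for the illustration; in practice you would use a proper implementation (e.g. R's urca package or statsmodels' coint).

```python
import random

def ols(x, y):
    """Simple OLS of y on x with an intercept; returns the residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    alpha = my - beta * mx
    return [b - alpha - beta * a for a, b in zip(x, y)]

def df_stat(e):
    """t-statistic of rho in the Dickey-Fuller regression
    delta e_t = rho * e_{t-1} + u_t (no intercept, no lags)."""
    lag = e[:-1]
    diff = [e[t + 1] - e[t] for t in range(len(e) - 1)]
    sxx = sum(v * v for v in lag)
    rho = sum(l * d for l, d in zip(lag, diff)) / sxx
    resid = [d - rho * l for l, d in zip(lag, diff)]
    s2 = sum(u * u for u in resid) / (len(resid) - 1)
    return rho / (s2 / sxx) ** 0.5

random.seed(42)
n = 500
x1 = [0.0]
for _ in range(n - 1):                      # x1 is a random walk, so I(1)
    x1.append(x1[-1] + random.gauss(0, 1))

x2_coint = [2 * v + random.gauss(0, 1) for v in x1]  # cointegrated with x1
x2_indep = [0.0]                                     # an unrelated random walk
for _ in range(n - 1):
    x2_indep.append(x2_indep[-1] + random.gauss(0, 1))

print(df_stat(ols(x1, x2_coint)))  # strongly negative: residuals are stationary
print(df_stat(ols(x1, x2_indep)))  # typically well above the critical value
```

A strongly negative statistic on the cointegrated pair (the residuals revert) versus an unremarkable one on the independent pair is exactly the evidence that would put you in situation (B) or (C) rather than (A).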
34,399
VAR or VECM for a mix of stationary and nonstationary variables?
Should I use VAR or VECM to find the relation between them?

In practice, it depends on the power of the cointegration tests. If your variables are cointegrated and you used a VAR model, you could have done better by estimating a VECM; your estimates are still consistent (in fact superconsistent), but inefficient. If your variables are not cointegrated and you use a VECM, you have imposed wrong information and the estimates are not consistent.

Will VAR or VECM give me a relation in terms of an equation which can be used for forecasting?

In practice, you can use both for forecasting. Of course, if the goal is to forecast, there are other criteria for checking model performance; this article introduces the concept.

Do I need to perform Johansen's test of cointegration?

If the goal is forecasting, estimate as many models as you can and compare their forecast performance. If the goal is to estimate the structure of the model, then yes, you should test for cointegration. In that case you should also do a sensitivity analysis by estimating other unrestricted VAR models, because the power of statistical tests is limited.
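The "compare their forecast performance" step can be sketched generically: hold out the end of the sample, produce rolling one-step-ahead forecasts from each candidate model, and compare an error measure such as RMSE. Below is a toy illustration on a simulated AR(1) series, with two deliberately naive "models" (random-walk and expanding-mean forecasts) standing in for the VAR/VECM candidates you would actually compare.

```python
import math
import random

random.seed(1)
# simulate a stationary AR(1): y_t = 0.7 * y_{t-1} + e_t
y = [0.0]
for _ in range(600):
    y.append(0.7 * y[-1] + random.gauss(0, 1))

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# rolling one-step-ahead forecast errors over a hold-out window
hold_out = range(400, 601)
err_rw   = [y[t] - y[t - 1] for t in hold_out]        # forecast: last observed value
err_mean = [y[t] - sum(y[:t]) / t for t in hold_out]  # forecast: expanding mean
print(rmse(err_rw), rmse(err_mean))  # pick the candidate with the lower score
```

For real VAR/VECM candidates the recipe is identical: re-estimate (or fix) each model, forecast one step, roll the origin forward, and rank the models by out-of-sample error rather than by in-sample fit.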
34,400
Is there a way to force a relationship between coefficients in logistic regression?
This is fairly easy to do with the optim function in R. My understanding is that you want to run a logistic regression where y is binary. You simply write the (negative) log-likelihood function and then hand it to optim. Below is some code I didn't run (pseudo code):

# d is your data frame and y is normalized to 0,1
your.fun = function(b) {
  EXP  = exp(d$x1*b + d$x2*b^2)
  VALS = ( EXP/(1+EXP) )^(d$y) * ( 1/(1+EXP) )^(1-d$y)
  return(-sum(log(VALS)))
}

result = optim(0, your.fun, method="BFGS", hessian=TRUE)

# estimate
result$par
# standard error
sqrt(diag(solve(result$hessian)))
# maximum log likelihood
-result$value

Notice that your.fun is the negative of the log-likelihood function, so optim is in effect maximizing the log-likelihood (by default optim minimizes everything, which is why I made the function negative). Since the Hessian is that of the negative log-likelihood, inverting it with solve() gives the estimated covariance of the parameter. If y is not binary, go to http://fisher.osu.edu/~schroeder.9/AMIS900/ch5.pdf for multinomial and conditional functional forms of logit models.
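For readers without R, the same idea can be sketched in plain Python: write the negative log-likelihood with the constraint beta2 = beta1^2 baked in, and minimize it over the single free parameter b (here with a crude grid search instead of BFGS; the simulated data, true b = 0.8, and the grid bounds are assumptions for the illustration).

```python
import math
import random

def neg_log_lik(b, data):
    """Negative log-likelihood of a logit model with the built-in
    constraint that the second coefficient is the square of the first:
    P(y = 1 | x1, x2) = 1 / (1 + exp(-(b*x1 + b^2*x2)))."""
    total = 0.0
    for x1, x2, y in data:
        eta = b * x1 + b * b * x2
        # numerically stable log(1 + exp(eta))
        if eta > 0:
            log1pexp = eta + math.log1p(math.exp(-eta))
        else:
            log1pexp = math.log1p(math.exp(eta))
        total += log1pexp - y * eta
    return total

# simulate data from the constrained model with true b = 0.8
random.seed(0)
true_b = 0.8
data = []
for _ in range(2000):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    p = 1 / (1 + math.exp(-(true_b * x1 + true_b ** 2 * x2)))
    data.append((x1, x2, 1 if random.random() < p else 0))

# crude one-dimensional grid search over b in [-3, 3], step 0.02
b_hat = min((k / 50 for k in range(-150, 151)),
            key=lambda b: neg_log_lik(b, data))
print(b_hat)  # close to the true value 0.8
```

The point, as in the R version, is that the constraint is enforced by construction: there is only one free parameter, so no constrained optimizer is needed.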